Dec 11 09:53:46 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 11 09:53:46 crc restorecon[4676]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 11 09:53:46 crc restorecon[4676]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc 
restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc 
restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 
09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc 
restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 09:53:46 crc 
restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 
crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 
09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 09:53:46 crc 
restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc 
restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc 
restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 11 09:53:46 crc restorecon[4676]:
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 
crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc 
restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc 
restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc 
restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc 
restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 11 09:53:46 crc restorecon[4676]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:46 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 09:53:47 crc restorecon[4676]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 11 09:53:47 crc restorecon[4676]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Dec 11 09:53:47 crc kubenswrapper[4746]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 11 09:53:47 crc kubenswrapper[4746]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 11 09:53:47 crc kubenswrapper[4746]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 11 09:53:47 crc kubenswrapper[4746]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 11 09:53:47 crc kubenswrapper[4746]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 11 09:53:47 crc kubenswrapper[4746]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.438887 4746 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442288 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442308 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442313 4746 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442318 4746 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442322 4746 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442326 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442331 4746 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442337 4746 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442347 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442351 4746 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442355 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442359 4746 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442362 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442366 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442370 4746 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442373 4746 feature_gate.go:330] unrecognized feature gate: Example
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442377 4746 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442382 4746 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442387 4746 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442391 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442395 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442399 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442402 4746 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442406 4746 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442410 4746 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442413 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442417 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442422 4746 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442426 4746 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442430 4746 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442435 4746 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442440 4746 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442446 4746 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442450 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442455 4746 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442459 4746 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442463 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442467 4746 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442471 4746 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442477 4746 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
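The same `unrecognized feature gate` warning recurs many times in this log, for dozens of gate names. A minimal sketch (assuming only the log format visible above; the helper name is hypothetical) for tallying which gates are warned about, and how often:

```python
import re
from collections import Counter

# Matches the klog warning emitted by feature_gate.go:330 in the entries above.
GATE_RE = re.compile(r"unrecognized feature gate: (\S+)")

def count_unrecognized_gates(log_text: str) -> Counter:
    """Return a Counter mapping each unrecognized gate name to its warning count."""
    return Counter(GATE_RE.findall(log_text))

# Small sample in the same shape as the journal entries above.
sample = (
    "W1211 09:53:47.442351 4746 feature_gate.go:330] unrecognized feature gate: HardwareSpeed\n"
    "W1211 09:53:47.442355 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration\n"
    "W1211 09:53:47.444342 4746 feature_gate.go:330] unrecognized feature gate: HardwareSpeed\n"
)
counts = count_unrecognized_gates(sample)
print(counts["HardwareSpeed"])  # 2
```

Deduplicating this way makes it easy to see that the warning block is repeated wholesale each time the kubelet re-parses its feature-gate configuration.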
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442481 4746 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442487 4746 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442493 4746 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442498 4746 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442504 4746 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442508 4746 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442512 4746 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442517 4746 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442521 4746 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442525 4746 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442529 4746 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442533 4746 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442537 4746 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442541 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442545 4746 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442549 4746 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442553 4746 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442557 4746 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442562 4746 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442566 4746 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442570 4746 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442574 4746 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442578 4746 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442582 4746 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442587 4746 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442591 4746 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442596 4746 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442600 4746 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442605 4746 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442609 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.442613 4746 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442865 4746 flags.go:64] FLAG: --address="0.0.0.0"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442877 4746 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442886 4746 flags.go:64] FLAG: --anonymous-auth="true"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442892 4746 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442898 4746 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442902 4746 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442909 4746 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442914 4746 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442919 4746 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442924 4746 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442928 4746 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442932 4746 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442937 4746 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442941 4746 flags.go:64] FLAG: --cgroup-root=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442945 4746 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442949 4746 flags.go:64] FLAG: --client-ca-file=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442953 4746 flags.go:64] FLAG: --cloud-config=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442957 4746 flags.go:64] FLAG: --cloud-provider=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442961 4746 flags.go:64] FLAG: --cluster-dns="[]"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442966 4746 flags.go:64] FLAG: --cluster-domain=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442970 4746 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442974 4746 flags.go:64] FLAG: --config-dir=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442978 4746 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442982 4746 flags.go:64] FLAG: --container-log-max-files="5"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442987 4746 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442991 4746 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.442995 4746 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443000 4746 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443004 4746 flags.go:64] FLAG: --contention-profiling="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443008 4746 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443012 4746 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443016 4746 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443020 4746 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443025 4746 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443031 4746 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443035 4746 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443039 4746 flags.go:64] FLAG: --enable-load-reader="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443059 4746 flags.go:64] FLAG: --enable-server="true"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443064 4746 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443069 4746 flags.go:64] FLAG: --event-burst="100"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443074 4746 flags.go:64] FLAG: --event-qps="50"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443078 4746 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443082 4746 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443086 4746 flags.go:64] FLAG: --eviction-hard=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443091 4746 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443096 4746 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443099 4746 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443104 4746 flags.go:64] FLAG: --eviction-soft=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443108 4746 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443112 4746 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443117 4746 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443121 4746 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443125 4746 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443129 4746 flags.go:64] FLAG: --fail-swap-on="true"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443133 4746 flags.go:64] FLAG: --feature-gates=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443138 4746 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443142 4746 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443146 4746 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443150 4746 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443154 4746 flags.go:64] FLAG: --healthz-port="10248"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443158 4746 flags.go:64] FLAG: --help="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443162 4746 flags.go:64] FLAG: --hostname-override=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443166 4746 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443171 4746 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443175 4746 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443179 4746 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443184 4746 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443188 4746 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443193 4746 flags.go:64] FLAG: --image-service-endpoint=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443196 4746 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443200 4746 flags.go:64] FLAG: --kube-api-burst="100"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443205 4746 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443209 4746 flags.go:64] FLAG: --kube-api-qps="50"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443213 4746 flags.go:64] FLAG: --kube-reserved=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443217 4746 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443221 4746 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443225 4746 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443229 4746 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443233 4746 flags.go:64] FLAG: --lock-file=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443237 4746 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443241 4746 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443247 4746 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443253 4746 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443257 4746 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443261 4746 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443265 4746 flags.go:64] FLAG: --logging-format="text"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443269 4746 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443274 4746 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443278 4746 flags.go:64] FLAG: --manifest-url=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443281 4746 flags.go:64] FLAG: --manifest-url-header=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443287 4746 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443291 4746 flags.go:64] FLAG: --max-open-files="1000000"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443296 4746 flags.go:64] FLAG: --max-pods="110"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443300 4746 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443304 4746 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443308 4746 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443312 4746 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443316 4746 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443322 4746 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443326 4746 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443335 4746 flags.go:64] FLAG: --node-status-max-images="50"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443339 4746 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443343 4746 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443348 4746 flags.go:64] FLAG: --pod-cidr=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443352 4746 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443358 4746 flags.go:64] FLAG: --pod-manifest-path=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443362 4746 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443366 4746 flags.go:64] FLAG: --pods-per-core="0"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443370 4746 flags.go:64] FLAG: --port="10250"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443374 4746 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443378 4746 flags.go:64] FLAG: --provider-id=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443382 4746 flags.go:64] FLAG: --qos-reserved=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443386 4746 flags.go:64] FLAG: --read-only-port="10255"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443390 4746 flags.go:64] FLAG: --register-node="true"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443394 4746 flags.go:64] FLAG: --register-schedulable="true"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443398 4746 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443407 4746 flags.go:64] FLAG: --registry-burst="10"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443412 4746 flags.go:64] FLAG: --registry-qps="5"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443416 4746 flags.go:64] FLAG: --reserved-cpus=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443420 4746 flags.go:64] FLAG: --reserved-memory=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443425 4746 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443430 4746 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443435 4746 flags.go:64] FLAG: --rotate-certificates="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443439 4746 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443443 4746 flags.go:64] FLAG: --runonce="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443448 4746 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443452 4746 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443457 4746 flags.go:64] FLAG: --seccomp-default="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443461 4746 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443465 4746 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443470 4746 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443475 4746 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443480 4746 flags.go:64] FLAG: --storage-driver-password="root"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443485 4746 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443490 4746 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443496 4746 flags.go:64] FLAG: --storage-driver-user="root"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443501 4746 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443506 4746 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443511 4746 flags.go:64] FLAG: --system-cgroups=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443529 4746 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443554 4746 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443559 4746 flags.go:64] FLAG: --tls-cert-file=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443564 4746 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443570 4746 flags.go:64] FLAG: --tls-min-version=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443574 4746 flags.go:64] FLAG: --tls-private-key-file=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443578 4746 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443582 4746 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443586 4746 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443591 4746 flags.go:64] FLAG: --v="2"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443596 4746 flags.go:64] FLAG: --version="false"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443605 4746 flags.go:64] FLAG: --vmodule=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443610 4746 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.443614 4746 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444342 4746 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444353 4746 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444358 4746 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444362 4746 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444365 4746 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444369 4746 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444373 4746 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444377 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444382 4746 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444386 4746 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444393 4746 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444399 4746 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444410 4746 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444415 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444419 4746 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444424 4746 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444429 4746 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444433 4746 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444438 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444442 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444446 4746 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444450 4746 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444456 4746 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444465 4746 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444471 4746 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444480 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444484 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444488 4746 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444493 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444497 4746 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444501 4746 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444506 4746 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444511 4746 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444518 4746 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
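The `flags.go:64] FLAG: --name="value"` lines earlier in this log enumerate every kubelet flag and its effective value. A minimal sketch for collecting that dump into a dictionary (assuming only the quoted `FLAG:` format shown above; the helper name is hypothetical):

```python
import re

# Matches the kubelet's flag dump, e.g.  flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')

def parse_flag_dump(log_text: str) -> dict:
    """Map each --flag name to the quoted value the kubelet logged for it."""
    return {name: value for name, value in FLAG_RE.findall(log_text)}

# Sample entries in the same shape as the log above.
sample = (
    'I1211 09:53:47.442932 4746 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"\n'
    'I1211 09:53:47.442937 4746 flags.go:64] FLAG: --cgroup-driver="cgroupfs"\n'
)
flags = parse_flag_dump(sample)
print(flags["--cgroup-driver"])  # cgroupfs
```

This is handy when diffing the flag dump between two kubelet restarts to spot a configuration change.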
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444523 4746 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444529 4746 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444536 4746 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444541 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444550 4746 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444555 4746 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444559 4746 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444564 4746 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444569 4746 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444573 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444577 4746 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444582 4746 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444588 4746 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444594 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444599 4746 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444604 4746 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444613 4746 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444618 4746 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444622 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444627 4746 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444632 4746 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444636 4746 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444641 4746 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444646 4746 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444655 4746 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444659 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444664 4746 feature_gate.go:330] unrecognized feature gate: Example
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444667 4746 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444671 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444680 4746 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444684 4746 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444690 4746 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444697 4746 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444702 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444706 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444711 4746 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.444717 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.444726 4746 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.452957 4746 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.452980 4746 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453079 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453089 4746 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453095 4746 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453099 4746 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453118 4746 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453123 4746 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453127 4746 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453132 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453136 4746 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453140 4746 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453144 4746 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453148 4746 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453153 4746 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 11 09:53:47 crc
kubenswrapper[4746]: W1211 09:53:47.453156 4746 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453160 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453164 4746 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453169 4746 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453174 4746 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453183 4746 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453193 4746 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453199 4746 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453205 4746 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453210 4746 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453217 4746 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453223 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453228 4746 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453234 4746 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453239 4746 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453244 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453248 4746 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453251 4746 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453255 4746 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453259 4746 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453263 4746 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453267 4746 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453271 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453275 4746 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453282 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 11 09:53:47 crc kubenswrapper[4746]: 
W1211 09:53:47.453290 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453296 4746 feature_gate.go:330] unrecognized feature gate: Example Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453300 4746 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453305 4746 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453310 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453314 4746 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453318 4746 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453323 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453328 4746 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453332 4746 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453337 4746 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453342 4746 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453346 4746 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453351 4746 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453355 4746 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 
09:53:47.453360 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453365 4746 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453370 4746 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453375 4746 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453380 4746 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453385 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453389 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453394 4746 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453398 4746 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453402 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453407 4746 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453411 4746 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453417 4746 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453421 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453424 4746 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453428 4746 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453432 4746 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453435 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.453441 4746 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453559 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453570 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453575 4746 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 11 09:53:47 crc 
kubenswrapper[4746]: W1211 09:53:47.453579 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453583 4746 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453586 4746 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453590 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453594 4746 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453597 4746 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453601 4746 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453604 4746 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453608 4746 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453612 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453616 4746 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453620 4746 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453623 4746 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453627 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453631 4746 feature_gate.go:330] 
unrecognized feature gate: PinnedImages Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453634 4746 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453638 4746 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453641 4746 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453645 4746 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453648 4746 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453653 4746 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453658 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453662 4746 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453666 4746 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453671 4746 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453675 4746 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453679 4746 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453684 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453688 4746 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453692 4746 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453695 4746 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453699 4746 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453703 4746 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453707 4746 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453711 4746 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453715 4746 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453719 4746 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453723 4746 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453727 4746 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453731 4746 
feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453735 4746 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453739 4746 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453745 4746 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453749 4746 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453754 4746 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453758 4746 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453763 4746 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453767 4746 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453772 4746 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453776 4746 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453780 4746 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453783 4746 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453787 4746 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453791 4746 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 11 
09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453795 4746 feature_gate.go:330] unrecognized feature gate: Example Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453799 4746 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453804 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453808 4746 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453813 4746 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453817 4746 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453822 4746 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453828 4746 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453834 4746 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453838 4746 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453843 4746 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453848 4746 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453852 4746 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.453857 4746 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.453864 4746 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.454231 4746 server.go:940] "Client rotation is on, will bootstrap in background" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.461178 4746 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.461258 4746 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.461763 4746 server.go:997] "Starting client certificate rotation" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.461792 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.462065 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-11 00:37:07.443936919 +0000 UTC Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.462195 4746 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 734h43m19.981746333s for next certificate rotation Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.470035 4746 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.471467 4746 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.481507 4746 log.go:25] "Validated CRI v1 runtime API" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.500213 4746 log.go:25] "Validated CRI v1 image API" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.501718 4746 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.504391 4746 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-11-09-49-01-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.504420 4746 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 
blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.518734 4746 manager.go:217] Machine: {Timestamp:2025-12-11 09:53:47.517720711 +0000 UTC m=+0.377584034 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:5d868abb-9952-4b6b-be6c-e2bc736f8f4d BootID:d3edc674-d518-40d2-b40b-af693a175be6 Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:63:d1:54 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:63:d1:54 Speed:-1 
Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:fb:f0:4d Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:4b:57:9e Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:d2:c6:f6 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:fa:57:3e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:9e:f3:ba:0e:76:96 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:de:1b:11:73:d2:03 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 
Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.518946 4746 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.519143 4746 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.519454 4746 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.519599 4746 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.519622 4746 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.519898 4746 topology_manager.go:138] "Creating topology manager with none policy"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.519912 4746 container_manager_linux.go:303] "Creating device plugin manager"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.520115 4746 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.520139 4746 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.520287 4746 state_mem.go:36] "Initialized new in-memory state store"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.520365 4746 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.562506 4746 kubelet.go:418] "Attempting to sync node with API server"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.562573 4746 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.562621 4746 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.562641 4746 kubelet.go:324] "Adding apiserver pod source"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.562906 4746 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.575843 4746 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.576430 4746 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.577270 4746 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.577920 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.577961 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.577974 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.577987 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.578004 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.578013 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.578078 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.578093 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.578103 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.578113 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.578148 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.578158 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.578381 4746 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.578862 4746 server.go:1280] "Started kubelet"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.579617 4746 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.580011 4746 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 11 09:53:47 crc systemd[1]: Started Kubernetes Kubelet.
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.579724 4746 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.581535 4746 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.581594 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Dec 11 09:53:47 crc kubenswrapper[4746]: E1211 09:53:47.581715 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError"
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.581596 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Dec 11 09:53:47 crc kubenswrapper[4746]: E1211 09:53:47.581763 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.581903 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.581942 4746 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.582690 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 09:27:02.333149845 +0000 UTC
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.582916 4746 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.582943 4746 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.583091 4746 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 11 09:53:47 crc kubenswrapper[4746]: E1211 09:53:47.583707 4746 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.583919 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Dec 11 09:53:47 crc kubenswrapper[4746]: E1211 09:53:47.583967 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError"
Dec 11 09:53:47 crc kubenswrapper[4746]: E1211 09:53:47.583859 4746 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.214:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1880208933ad26f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 09:53:47.578828535 +0000 UTC m=+0.438691868,LastTimestamp:2025-12-11 09:53:47.578828535 +0000 UTC m=+0.438691868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.584458 4746 server.go:460] "Adding debug handlers to kubelet server"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.587836 4746 factory.go:55] Registering systemd factory
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.587853 4746 factory.go:221] Registration of the systemd container factory successfully
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.588563 4746 factory.go:153] Registering CRI-O factory
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.588579 4746 factory.go:221] Registration of the crio container factory successfully
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.588647 4746 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.588680 4746 factory.go:103] Registering Raw factory
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.588699 4746 manager.go:1196] Started watching for new ooms in manager
Dec 11 09:53:47 crc kubenswrapper[4746]: E1211 09:53:47.588863 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="200ms"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.590617 4746 manager.go:319] Starting recovery of all containers
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592037 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592096 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592108 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592118 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592127 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592136 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592144 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592152 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592163 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592171 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592180 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592200 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592217 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592233 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592251 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592291 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592301 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592313 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592323 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592334 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592343 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592358 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592369 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592379 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592389 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592399 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592409 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592419 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592429 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592439 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592448 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592479 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592496 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592505 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592515 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592524 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592533 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592544 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592553 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592565 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592575 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592585 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592594 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592604 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592613 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592625 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592636 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592648 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592659 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.592670 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.595534 4746 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.595649 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.595714 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.595778 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.595836 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.595936 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.595997 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.596070 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.596131 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.596188 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.596244 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.596310 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.596365 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.596422 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.596483 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.596540 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.596628 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.596703 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.596766 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.596825 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.596886 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.596956 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597067 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597132 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597189 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597244 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597300 4746 reconstruct.go:130] "Volume is
marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597370 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597425 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597480 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597533 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597584 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597645 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597699 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597751 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597803 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597857 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597918 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.597990 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.598100 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.598168 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.598237 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.598297 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.598353 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.598406 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.598460 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.598514 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.598566 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.598624 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.598678 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.598750 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.598837 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.598913 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.598997 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.599073 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.599140 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.599199 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.599259 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.599386 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.599447 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.599510 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.599570 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.599879 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.599939 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.600000 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.600097 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.600164 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.600221 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.600275 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" 
seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.600349 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.600404 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.600456 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.600510 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.600582 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.600678 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 
09:53:47.600742 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.600796 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.600857 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.600914 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.600975 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.601029 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.601106 4746 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.601160 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.601217 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.601275 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.601341 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.601475 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.601541 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.601599 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.601652 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.601713 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.601766 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.601837 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.601904 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.602000 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.602111 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.602205 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.602276 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.602330 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.602385 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.602440 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.602504 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.602557 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.602609 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.602662 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.602722 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" 
seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.602786 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.602842 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.602896 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.602949 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.603004 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.603091 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 
09:53:47.603149 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.603204 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.603258 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.603319 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.603391 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.603459 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.603514 4746 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.603583 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.603661 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.603718 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.603779 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.603836 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.603890 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.603946 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.604002 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.604155 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605033 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605134 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605192 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605234 4746 manager.go:324] Recovery completed
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605246 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605595 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605629 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605643 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605656 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605672 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605686 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605699 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605711 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605725 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605736 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605749 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605760 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605771 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605783 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605794 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605805 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605816 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605828 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605841 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605851 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605866 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605877 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605887 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605899 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605910 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605921 4746 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605932 4746 reconstruct.go:97] "Volume reconstruction finished"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.605940 4746 reconciler.go:26] "Reconciler: start to sync state"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.613812 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.617111 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.617146 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.617158 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.618905 4746 cpu_manager.go:225] "Starting CPU manager" policy="none"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.618928 4746 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.618952 4746 state_mem.go:36] "Initialized new in-memory state store"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.625632 4746 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.629005 4746 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.629060 4746 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.629086 4746 kubelet.go:2335] "Starting kubelet main sync loop"
Dec 11 09:53:47 crc kubenswrapper[4746]: E1211 09:53:47.629128 4746 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 11 09:53:47 crc kubenswrapper[4746]: W1211 09:53:47.632086 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused
Dec 11 09:53:47 crc kubenswrapper[4746]: E1211 09:53:47.632177 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.632895 4746 policy_none.go:49] "None policy: Start"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.633688 4746 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.633758 4746 state_mem.go:35] "Initializing new in-memory state store"
Dec 11 09:53:47 crc kubenswrapper[4746]: E1211 09:53:47.663944 4746 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.214:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.1880208933ad26f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 09:53:47.578828535 +0000 UTC m=+0.438691868,LastTimestamp:2025-12-11 09:53:47.578828535 +0000 UTC m=+0.438691868,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.678446 4746 manager.go:334] "Starting Device Plugin manager"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.679572 4746 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.679597 4746 server.go:79] "Starting device plugin registration server"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.681471 4746 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.681511 4746 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.681647 4746 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.681755 4746 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.681761 4746 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 11 09:53:47 crc kubenswrapper[4746]: E1211 09:53:47.690999 4746 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.729329 4746 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc"]
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.729488 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.730848 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.730972 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.731078 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.731262 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.731427 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.731482 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.732359 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.732384 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.732396 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.732462 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.732485 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.732494 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.732512 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.732669 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.732726 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.733499 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.733532 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.733549 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.733687 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.733723 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.733742 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.733805 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.733904 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.733950 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.735658 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.735689 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.735704 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.735955 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.735967 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.736082 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.736093 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.736127 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.736196 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.736773 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.736797 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.736804 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.736943 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.736996 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.737101 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.737123 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.737134 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.737579 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.737600 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.737607 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.782065 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.783072 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.783107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.783119 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.783138 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: E1211 09:53:47.783515 4746 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.214:6443: connect: connection refused" node="crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: E1211 09:53:47.790243 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="400ms"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.807746 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.807858 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.807905 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.807936 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.807970 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.808001 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.808033 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.808159 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.808362 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.808403 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.808424 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.808443 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.808459 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.808475 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.808492 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909173 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909235 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909268 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909297 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909324 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909347 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909371 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909398 4746 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909428 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909443 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909460 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909506 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909452 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 09:53:47 
crc kubenswrapper[4746]: I1211 09:53:47.909519 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909482 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909486 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909441 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909467 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909578 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909639 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909739 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909785 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909824 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909848 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909853 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909863 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909881 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909892 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.909927 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 09:53:47 crc 
kubenswrapper[4746]: I1211 09:53:47.910019 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.984137 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.985768 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.985816 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.985830 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:47 crc kubenswrapper[4746]: I1211 09:53:47.985854 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 09:53:47 crc kubenswrapper[4746]: E1211 09:53:47.986356 4746 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.214:6443: connect: connection refused" node="crc" Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.066326 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.072307 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.091888 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 11 09:53:48 crc kubenswrapper[4746]: W1211 09:53:48.107687 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-e785c3f38a142578cf4982753d93333c897caed645ab953bf511075fb7d542a5 WatchSource:0}: Error finding container e785c3f38a142578cf4982753d93333c897caed645ab953bf511075fb7d542a5: Status 404 returned error can't find the container with id e785c3f38a142578cf4982753d93333c897caed645ab953bf511075fb7d542a5 Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.109037 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 11 09:53:48 crc kubenswrapper[4746]: W1211 09:53:48.110124 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-e877dce0e67b6cb23350f857c2ae8a4bfebb71f5f82e6bccd8c6052515e7e31b WatchSource:0}: Error finding container e877dce0e67b6cb23350f857c2ae8a4bfebb71f5f82e6bccd8c6052515e7e31b: Status 404 returned error can't find the container with id e877dce0e67b6cb23350f857c2ae8a4bfebb71f5f82e6bccd8c6052515e7e31b Dec 11 09:53:48 crc kubenswrapper[4746]: W1211 09:53:48.114614 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-5006f4eaea3eee77096563dfc7de4069856c64be7d56c058f3b938d3efe809b7 WatchSource:0}: Error finding container 5006f4eaea3eee77096563dfc7de4069856c64be7d56c058f3b938d3efe809b7: Status 404 returned error can't find the container with id 5006f4eaea3eee77096563dfc7de4069856c64be7d56c058f3b938d3efe809b7 Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.115102 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:53:48 crc kubenswrapper[4746]: W1211 09:53:48.127006 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-745ea3f44c26838cfc9eed46b1f8a2ec652435febeba598df88507f227dcd275 WatchSource:0}: Error finding container 745ea3f44c26838cfc9eed46b1f8a2ec652435febeba598df88507f227dcd275: Status 404 returned error can't find the container with id 745ea3f44c26838cfc9eed46b1f8a2ec652435febeba598df88507f227dcd275 Dec 11 09:53:48 crc kubenswrapper[4746]: W1211 09:53:48.145326 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-e452bb5fde8c540017261798f6431906c793bd5c1dcc0bba72f85ae0556e7462 WatchSource:0}: Error finding container e452bb5fde8c540017261798f6431906c793bd5c1dcc0bba72f85ae0556e7462: Status 404 returned error can't find the container with id e452bb5fde8c540017261798f6431906c793bd5c1dcc0bba72f85ae0556e7462 Dec 11 09:53:48 crc kubenswrapper[4746]: E1211 09:53:48.190981 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="800ms" Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.387233 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.389415 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.389457 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 
09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.389467 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.389491 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 09:53:48 crc kubenswrapper[4746]: E1211 09:53:48.390017 4746 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.214:6443: connect: connection refused" node="crc" Dec 11 09:53:48 crc kubenswrapper[4746]: W1211 09:53:48.411498 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Dec 11 09:53:48 crc kubenswrapper[4746]: E1211 09:53:48.411625 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError" Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.582562 4746 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.583584 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 02:36:04.254590864 +0000 UTC Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.583623 4746 certificate_manager.go:356] kubernetes.io/kubelet-serving: 
Waiting 112h42m15.670970264s for next certificate rotation Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.634220 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e785c3f38a142578cf4982753d93333c897caed645ab953bf511075fb7d542a5"} Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.635442 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e452bb5fde8c540017261798f6431906c793bd5c1dcc0bba72f85ae0556e7462"} Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.636666 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"745ea3f44c26838cfc9eed46b1f8a2ec652435febeba598df88507f227dcd275"} Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.637887 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"5006f4eaea3eee77096563dfc7de4069856c64be7d56c058f3b938d3efe809b7"} Dec 11 09:53:48 crc kubenswrapper[4746]: I1211 09:53:48.639029 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e877dce0e67b6cb23350f857c2ae8a4bfebb71f5f82e6bccd8c6052515e7e31b"} Dec 11 09:53:48 crc kubenswrapper[4746]: W1211 09:53:48.766203 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Dec 11 09:53:48 crc 
kubenswrapper[4746]: E1211 09:53:48.766299 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError" Dec 11 09:53:48 crc kubenswrapper[4746]: W1211 09:53:48.810866 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Dec 11 09:53:48 crc kubenswrapper[4746]: E1211 09:53:48.811152 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError" Dec 11 09:53:48 crc kubenswrapper[4746]: W1211 09:53:48.916938 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Dec 11 09:53:48 crc kubenswrapper[4746]: E1211 09:53:48.917032 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError" Dec 11 09:53:48 crc kubenswrapper[4746]: E1211 09:53:48.991995 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="1.6s" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.190435 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.192462 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.192508 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.192526 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.192561 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 09:53:49 crc kubenswrapper[4746]: E1211 09:53:49.193183 4746 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.214:6443: connect: connection refused" node="crc" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.582218 4746 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.643816 4746 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="56c7c2aa376c17582924cc7177f101ff4540b61913daeefbb126e6d3fe884dfd" exitCode=0 Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.643897 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"56c7c2aa376c17582924cc7177f101ff4540b61913daeefbb126e6d3fe884dfd"} Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.643975 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.644788 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.644823 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.644835 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.645366 4746 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="4198aee534e1b263316ac333389124e1416f15b759253ea951e004336f2b86ae" exitCode=0 Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.645415 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.645424 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"4198aee534e1b263316ac333389124e1416f15b759253ea951e004336f2b86ae"} Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.646703 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.646732 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.646746 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.647814 4746 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f" exitCode=0 Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.647890 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f"} Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.647931 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.648572 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.648590 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.648601 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.650359 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad"} Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.650393 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5"} Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.650408 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348"} Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.650420 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2"} Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.650434 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.651081 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.651112 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.651124 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.651580 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a" exitCode=0 Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.651612 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a"} Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.651699 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.652470 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.652492 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.652502 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.657775 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.658560 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.658589 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:49 crc kubenswrapper[4746]: I1211 09:53:49.658604 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:50 crc kubenswrapper[4746]: W1211 09:53:50.545036 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Dec 11 09:53:50 crc kubenswrapper[4746]: E1211 09:53:50.545134 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError" Dec 11 09:53:50 crc kubenswrapper[4746]: E1211 09:53:50.593371 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="3.2s" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.658380 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"97a1e1264ef536cc9003210b7d9fc47faa3d6b2b80bf328fb0fafddb91e1ad5f"} Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.658450 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d497e87b108ba839d40a853ccdf1c277759bd4b987852da432b7f876d1ce1890"} Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.658473 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ffb65884bd374a1f68af73bd917b4b030ca7c890d5dbdeace452d63bcfc8d99d"} Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.658400 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.659665 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.659698 4746 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.659707 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.661022 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631"} Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.661077 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8"} Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.661092 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2"} Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.662769 4746 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c4f3842a712af1579cc18307fa2a8a688181aa6bb7ca33d07ddbb19b3f6caeda" exitCode=0 Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.662856 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c4f3842a712af1579cc18307fa2a8a688181aa6bb7ca33d07ddbb19b3f6caeda"} Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.662970 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.664222 4746 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.664245 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.664254 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.664241 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"319e929376197a4ad59aba9f64c20474d60de480e41f84cc01de5fbc7fa3bd8a"} Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.664271 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.664296 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.665591 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.665632 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.665644 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.665672 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.665707 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.665724 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.686776 4746 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.793668 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.794867 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.794913 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.794926 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:50 crc kubenswrapper[4746]: I1211 09:53:50.794953 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 09:53:50 crc kubenswrapper[4746]: E1211 09:53:50.795437 4746 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.214:6443: connect: connection refused" node="crc" Dec 11 09:53:50 crc kubenswrapper[4746]: W1211 09:53:50.802035 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.214:6443: connect: connection refused Dec 11 09:53:50 crc kubenswrapper[4746]: E1211 09:53:50.802146 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.214:6443: connect: connection refused" logger="UnhandledError" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.669337 4746 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4c70f6bd91f285434c842b6bae9e5f75f4cb5dd7dff272c42e89286a4a7f0ebd" exitCode=0 Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.669383 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4c70f6bd91f285434c842b6bae9e5f75f4cb5dd7dff272c42e89286a4a7f0ebd"} Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.669449 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.670267 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.670296 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.670312 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.674093 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627"} Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.674132 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca"} Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.674139 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.674160 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.674189 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.674327 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.674927 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.674943 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.674951 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.675221 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.675278 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.675302 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.676061 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 
09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.676097 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.676109 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:51 crc kubenswrapper[4746]: I1211 09:53:51.924286 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 09:53:52 crc kubenswrapper[4746]: I1211 09:53:52.083815 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:53:52 crc kubenswrapper[4746]: I1211 09:53:52.679109 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"63ebf675f4303f52f062fc79f3751366b3853122b49af45814f19399bcdb798f"} Dec 11 09:53:52 crc kubenswrapper[4746]: I1211 09:53:52.679169 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f33fd3e77fc5e56bba4d7007672d1e95456a26bc2715154b74ba89d17daff102"} Dec 11 09:53:52 crc kubenswrapper[4746]: I1211 09:53:52.679176 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 09:53:52 crc kubenswrapper[4746]: I1211 09:53:52.679224 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:52 crc kubenswrapper[4746]: I1211 09:53:52.679182 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:52 crc kubenswrapper[4746]: I1211 09:53:52.680203 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:52 crc kubenswrapper[4746]: I1211 
09:53:52.680238 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:52 crc kubenswrapper[4746]: I1211 09:53:52.680249 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:52 crc kubenswrapper[4746]: I1211 09:53:52.680648 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:52 crc kubenswrapper[4746]: I1211 09:53:52.680680 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:52 crc kubenswrapper[4746]: I1211 09:53:52.680694 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.303020 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.303212 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.304284 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.304321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.304332 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.685805 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.686168 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" 
Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.686643 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.686928 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0204a90722254d27793131bdc8f28fc1843c69ea7e36d667fbfa8a634ebbd082"} Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.686964 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dcce5c2f2a96abf7c2f9f937e3a7fffbf46ad2b8a0aa4d1398f9aaaf24cec19a"} Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.686974 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f7f5537cd0053c95a63c6cacc4cf0fb7137b16eb68fa1e330a6eafb601a73ad8"} Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.687909 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.687963 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.687977 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.688130 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.688154 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.688170 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.996127 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.997457 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.997505 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.997518 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:53 crc kubenswrapper[4746]: I1211 09:53:53.997547 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 09:53:54 crc kubenswrapper[4746]: I1211 09:53:54.689016 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:54 crc kubenswrapper[4746]: I1211 09:53:54.690175 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:54 crc kubenswrapper[4746]: I1211 09:53:54.690201 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:54 crc kubenswrapper[4746]: I1211 09:53:54.690212 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:54 crc kubenswrapper[4746]: I1211 09:53:54.702506 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 11 09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.079831 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 11 09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.340860 4746 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.341233 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.341307 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.343180 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.343230 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.343241 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.691874 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.692816 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.692864 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.692882 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.869291 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.869528 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 
09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.871173 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.871331 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.871436 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:55 crc kubenswrapper[4746]: I1211 09:53:55.874322 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:53:56 crc kubenswrapper[4746]: I1211 09:53:56.249361 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:53:56 crc kubenswrapper[4746]: I1211 09:53:56.249625 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:56 crc kubenswrapper[4746]: I1211 09:53:56.251567 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:56 crc kubenswrapper[4746]: I1211 09:53:56.251624 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:56 crc kubenswrapper[4746]: I1211 09:53:56.251639 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:56 crc kubenswrapper[4746]: I1211 09:53:56.694881 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:56 crc kubenswrapper[4746]: I1211 09:53:56.694947 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:56 crc kubenswrapper[4746]: I1211 09:53:56.695011 4746 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:53:56 crc kubenswrapper[4746]: I1211 09:53:56.696154 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:56 crc kubenswrapper[4746]: I1211 09:53:56.696206 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:56 crc kubenswrapper[4746]: I1211 09:53:56.696233 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:56 crc kubenswrapper[4746]: I1211 09:53:56.696246 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:56 crc kubenswrapper[4746]: I1211 09:53:56.696234 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:56 crc kubenswrapper[4746]: I1211 09:53:56.696350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:56 crc kubenswrapper[4746]: I1211 09:53:56.900372 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:53:57 crc kubenswrapper[4746]: E1211 09:53:57.691121 4746 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 09:53:57 crc kubenswrapper[4746]: I1211 09:53:57.696687 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:57 crc kubenswrapper[4746]: I1211 09:53:57.697684 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:57 crc kubenswrapper[4746]: I1211 09:53:57.697713 4746 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:57 crc kubenswrapper[4746]: I1211 09:53:57.697723 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:58 crc kubenswrapper[4746]: I1211 09:53:58.699550 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:58 crc kubenswrapper[4746]: I1211 09:53:58.700548 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:58 crc kubenswrapper[4746]: I1211 09:53:58.700589 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:58 crc kubenswrapper[4746]: I1211 09:53:58.700600 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:58 crc kubenswrapper[4746]: I1211 09:53:58.704155 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:53:59 crc kubenswrapper[4746]: I1211 09:53:59.701630 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:53:59 crc kubenswrapper[4746]: I1211 09:53:59.703181 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:53:59 crc kubenswrapper[4746]: I1211 09:53:59.703213 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:53:59 crc kubenswrapper[4746]: I1211 09:53:59.703293 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:53:59 crc kubenswrapper[4746]: I1211 09:53:59.901587 4746 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 09:53:59 crc kubenswrapper[4746]: I1211 09:53:59.901741 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 09:54:01 crc kubenswrapper[4746]: W1211 09:54:01.498064 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 11 09:54:01 crc kubenswrapper[4746]: I1211 09:54:01.498168 4746 trace.go:236] Trace[207119678]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 09:53:51.497) (total time: 10000ms): Dec 11 09:54:01 crc kubenswrapper[4746]: Trace[207119678]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (09:54:01.498) Dec 11 09:54:01 crc kubenswrapper[4746]: Trace[207119678]: [10.000879387s] [10.000879387s] END Dec 11 09:54:01 crc kubenswrapper[4746]: E1211 09:54:01.498194 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 11 09:54:01 crc kubenswrapper[4746]: I1211 
09:54:01.583332 4746 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 11 09:54:01 crc kubenswrapper[4746]: W1211 09:54:01.657006 4746 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 11 09:54:01 crc kubenswrapper[4746]: I1211 09:54:01.657108 4746 trace.go:236] Trace[1140039928]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 09:53:51.656) (total time: 10001ms): Dec 11 09:54:01 crc kubenswrapper[4746]: Trace[1140039928]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (09:54:01.656) Dec 11 09:54:01 crc kubenswrapper[4746]: Trace[1140039928]: [10.001013215s] [10.001013215s] END Dec 11 09:54:01 crc kubenswrapper[4746]: E1211 09:54:01.657127 4746 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 11 09:54:02 crc kubenswrapper[4746]: I1211 09:54:02.084593 4746 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 09:54:02 crc kubenswrapper[4746]: I1211 09:54:02.084673 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 09:54:02 crc kubenswrapper[4746]: I1211 09:54:02.881917 4746 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 11 09:54:02 crc kubenswrapper[4746]: I1211 09:54:02.882018 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 11 09:54:05 crc kubenswrapper[4746]: I1211 09:54:05.118964 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 11 09:54:05 crc kubenswrapper[4746]: I1211 09:54:05.119280 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:54:05 crc kubenswrapper[4746]: I1211 09:54:05.120836 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:05 crc kubenswrapper[4746]: I1211 09:54:05.121091 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:05 crc kubenswrapper[4746]: I1211 09:54:05.121318 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:05 crc kubenswrapper[4746]: I1211 09:54:05.137601 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-etcd/etcd-crc" Dec 11 09:54:05 crc kubenswrapper[4746]: I1211 09:54:05.716464 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:54:05 crc kubenswrapper[4746]: I1211 09:54:05.717803 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:05 crc kubenswrapper[4746]: I1211 09:54:05.717848 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:05 crc kubenswrapper[4746]: I1211 09:54:05.717882 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:06 crc kubenswrapper[4746]: I1211 09:54:06.173235 4746 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 11 09:54:06 crc kubenswrapper[4746]: I1211 09:54:06.950297 4746 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.094410 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.094584 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.096586 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.096622 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.096633 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.101596 4746 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:54:07 crc kubenswrapper[4746]: E1211 09:54:07.706931 4746 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.724372 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.724426 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.725468 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.725498 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.725508 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:07 crc kubenswrapper[4746]: E1211 09:54:07.873599 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Dec 11 09:54:07 crc kubenswrapper[4746]: E1211 09:54:07.876187 4746 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.895458 4746 trace.go:236] Trace[1128310569]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 09:53:55.160) (total time: 12735ms): Dec 11 09:54:07 crc kubenswrapper[4746]: Trace[1128310569]: ---"Objects listed" 
error: 12735ms (09:54:07.895) Dec 11 09:54:07 crc kubenswrapper[4746]: Trace[1128310569]: [12.735164018s] [12.735164018s] END Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.895481 4746 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.895896 4746 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.896111 4746 trace.go:236] Trace[1707823375]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (11-Dec-2025 09:53:54.931) (total time: 12964ms): Dec 11 09:54:07 crc kubenswrapper[4746]: Trace[1707823375]: ---"Objects listed" error: 12964ms (09:54:07.895) Dec 11 09:54:07 crc kubenswrapper[4746]: Trace[1707823375]: [12.964532726s] [12.964532726s] END Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.896134 4746 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.912823 4746 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35342->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.913135 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35342->192.168.126.11:17697: read: connection reset by peer" Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.912837 4746 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints 
namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35350->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.913745 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35350->192.168.126.11:17697: read: connection reset by peer" Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.914221 4746 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 11 09:54:07 crc kubenswrapper[4746]: I1211 09:54:07.914288 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.105270 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.112153 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.576748 4746 apiserver.go:52] "Watching apiserver" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.579765 4746 reflector.go:368] 
Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.580194 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.580930 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.580946 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.581098 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.581201 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.581244 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.581217 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.581369 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.581474 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.581499 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.583436 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.583819 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.583824 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.584267 4746 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.584695 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.585195 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.585275 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.585362 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.585831 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.586075 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 
09:54:08.602682 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.602753 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.602787 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.602814 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.602840 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.602869 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod 
\"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.602896 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.602922 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.602945 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.602970 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.602998 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603029 4746 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603070 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603096 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603123 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603157 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603185 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603216 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603250 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603250 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603277 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603396 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603453 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603520 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603596 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603652 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603698 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603745 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603794 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603842 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603888 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 
09:54:08.603951 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603991 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604037 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604113 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604159 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604211 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604259 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604306 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604357 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604444 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604544 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604628 4746 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604702 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604778 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604845 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604909 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604986 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.605105 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.605175 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.605238 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.605314 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.605383 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.605454 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.605522 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.605604 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.605688 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.605770 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.605835 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.605910 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.606676 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.606821 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.606907 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.606961 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607000 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607042 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607104 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607169 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607220 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607263 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607309 4746 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607354 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607404 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607450 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607491 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607535 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607580 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607623 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607671 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607749 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607797 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" 
(UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607841 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607881 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607919 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607964 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608005 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608071 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608119 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608163 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608203 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608250 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608294 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608339 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608382 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608428 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608472 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608518 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608619 4746 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608664 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608709 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608754 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608792 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608833 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: 
\"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608880 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608924 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608967 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609011 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609076 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609121 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609165 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609214 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609260 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609438 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609489 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609538 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609582 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609630 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609674 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609721 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609770 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: 
\"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609813 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609879 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609937 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609993 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610078 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610157 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610215 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610289 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610344 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610402 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610457 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610510 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610567 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610622 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610685 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610730 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" 
(UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610757 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610784 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610816 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610843 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610869 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610896 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610924 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610951 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.610976 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.611003 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.611029 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 09:54:08 crc 
kubenswrapper[4746]: I1211 09:54:08.611100 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.611132 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.611158 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.611716 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.611750 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.611777 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod 
\"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.611803 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.611831 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.611858 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.611909 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.611938 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.611966 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.611996 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612027 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612075 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612104 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612130 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod 
\"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612159 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.603843 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612192 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612226 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612255 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604064 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604526 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.604633 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.605078 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.605286 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.605585 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.605816 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.606125 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.606577 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.606600 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.606792 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.606786 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607090 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607205 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607456 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607517 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607678 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607782 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.607867 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608033 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608353 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608396 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.608866 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609258 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.609282 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.611751 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612155 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612262 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612588 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612614 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612624 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612657 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612689 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612715 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612748 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612772 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" 
(UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612795 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612819 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612842 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612864 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612883 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612905 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612929 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612956 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612982 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.612992 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613009 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613036 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613083 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613108 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613135 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 
09:54:08.613162 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613187 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613211 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613242 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613267 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613291 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613314 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613341 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613393 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613423 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613452 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613482 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613519 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613594 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613621 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613646 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" 
(UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613675 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613701 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613726 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613751 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613782 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613809 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613908 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.614132 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.614153 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.614170 4746 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.614183 4746 reconciler_common.go:293] "Volume detached for volume 
\"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.614197 4746 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.614209 4746 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.627978 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491c
d10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613337 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod 
"22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613410 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613612 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613843 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613887 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613946 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.613978 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.653000 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.613996 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.614116 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.614222 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.614327 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.614350 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.614480 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.614408 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.615866 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.615897 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.616073 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.616181 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.616423 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.616658 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.616756 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:54:09.116734509 +0000 UTC m=+21.976597892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.616892 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.617213 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.617410 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.617451 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.617476 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.617712 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.617755 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.617826 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.618000 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.618000 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.618142 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.618339 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.618454 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.618626 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.618734 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.618775 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.618912 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.619019 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.619081 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.619182 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.619444 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.625205 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.625236 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.625385 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.626146 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.626155 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.626176 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.626297 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.626308 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.626376 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.626521 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.626613 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.626737 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.626813 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.626828 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.626850 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.626997 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.627027 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.627176 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.627395 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.627410 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.627917 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.627952 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.627993 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.628024 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.628330 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.628376 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.628401 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.628542 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.628677 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.628728 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.628757 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.628865 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.629040 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.629324 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.629419 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.629485 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.619952 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.629849 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.629912 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.629910 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.629929 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.630128 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.630155 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.630190 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.630222 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.630458 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.630626 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.631090 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.631119 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.631234 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.631430 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.631563 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.631771 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.631823 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.632181 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.633145 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.634684 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.634809 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.634880 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.635181 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.635255 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.635599 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.635617 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.635709 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.635811 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.636663 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.636891 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.636935 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.637030 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.637144 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.637232 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.637407 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.637418 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.637699 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.637920 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.638213 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.638087 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.638523 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.638634 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.639166 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.639199 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.639434 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.639650 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.639709 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.639790 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.640202 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.640227 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.640244 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.640281 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.640307 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.640535 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.640563 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.640573 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.640650 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.641494 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.641534 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.641540 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.642587 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.642611 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.643924 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.652452 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.652920 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.653022 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.653844 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.653339 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.653490 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.653516 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.653532 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.653762 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.653855 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.653997 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.654073 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:09.15403593 +0000 UTC m=+22.013899323 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.654104 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:09.154094172 +0000 UTC m=+22.013957585 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.654123 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.654195 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.654668 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.654793 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.654939 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.652939 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.658103 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.659103 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.660297 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.660562 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.660612 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.660870 4746 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.662893 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.668070 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.668204 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.668291 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.668373 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.668497 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:09.1684765 +0000 UTC m=+22.028339893 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.668595 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.668672 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.668767 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:09.168756608 +0000 UTC m=+22.028620021 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.674067 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.674383 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.675766 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.676152 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.677925 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.678450 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.679210 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.684607 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.689981 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.696292 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.723120 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.723214 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.724112 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.724129 4746 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.724400 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.724648 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.724817 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.724899 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.724959 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.725016 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.725099 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.725163 4746 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.725224 4746 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.725280 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc 
kubenswrapper[4746]: I1211 09:54:08.725346 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.725404 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.725463 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.725518 4746 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.725586 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.725640 4746 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.725702 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.725760 4746 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.725824 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.725882 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.725941 4746 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.725996 4746 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.726071 4746 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.726128 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.726189 4746 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.726243 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.726340 4746 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.726399 4746 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.726465 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.726522 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.726574 4746 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.726634 4746 reconciler_common.go:293] "Volume detached for volume 
\"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.726695 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.726755 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.726812 4746 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.726871 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.726957 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.727013 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.727081 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.727145 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.727199 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.727280 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.727353 4746 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.727421 4746 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.727556 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.727621 4746 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc 
kubenswrapper[4746]: I1211 09:54:08.727671 4746 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.727731 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.727786 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.727840 4746 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.727898 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.727951 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728005 4746 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728084 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728147 4746 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728201 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728260 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728315 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728368 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728418 4746 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728476 4746 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 
09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728535 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728592 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728648 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728707 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728762 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728817 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728883 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.728949 4746 
reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729003 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729072 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729130 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729194 4746 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729254 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729308 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729363 4746 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729418 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729468 4746 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729529 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729585 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729636 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729691 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729748 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729807 4746 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729865 4746 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729921 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729977 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.730033 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.730111 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.730169 4746 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 
crc kubenswrapper[4746]: I1211 09:54:08.730225 4746 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.730281 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.730374 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.730426 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.730483 4746 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.730538 4746 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.730593 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.730650 4746 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.730701 4746 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.730753 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.730808 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.730865 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.730920 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.730978 4746 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731034 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731103 4746 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731163 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731223 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731281 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731338 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731393 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731448 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 
11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731507 4746 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731561 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731613 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731680 4746 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731736 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731791 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731849 4746 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731902 4746 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.731958 4746 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732013 4746 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732085 4746 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732140 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732266 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732322 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732376 4746 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732434 4746 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732492 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732554 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732610 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732666 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732717 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732769 4746 reconciler_common.go:293] "Volume detached for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732820 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732876 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732928 4746 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.732983 4746 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733039 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733109 4746 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733161 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" 
DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733222 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733279 4746 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733332 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733389 4746 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733442 4746 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733500 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733553 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: 
I1211 09:54:08.733605 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733655 4746 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733710 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733765 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733826 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733878 4746 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733930 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.733983 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.734033 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.734110 4746 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.734167 4746 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.734225 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.734281 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.734353 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.734410 4746 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 
09:54:08.734465 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.734518 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.734575 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.734628 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.734686 4746 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.734753 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.734830 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.734885 4746 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" 
(UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.734940 4746 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.734992 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.735065 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.735131 4746 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.735188 4746 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.735248 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.735305 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" 
DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.735359 4746 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.735414 4746 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.735471 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.735523 4746 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.735578 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.735629 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.735694 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 
09:54:08.735769 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.735828 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.735898 4746 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.729076 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.737482 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627" exitCode=255 Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.738032 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627"} Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.756335 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 09:54:08 crc kubenswrapper[4746]: E1211 09:54:08.756758 4746 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.763986 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.766298 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.854315 4746 scope.go:117] "RemoveContainer" containerID="c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.868488 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.878621 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.894323 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.914472 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.928681 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.954978 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 11 09:54:08 crc kubenswrapper[4746]: W1211 09:54:08.955278 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-e31d6fb2321dcb7fececb1892aaee581e5679dbc5cf4282bb57e2befd34d463e WatchSource:0}: Error finding container e31d6fb2321dcb7fececb1892aaee581e5679dbc5cf4282bb57e2befd34d463e: Status 404 returned error can't find the container with id e31d6fb2321dcb7fececb1892aaee581e5679dbc5cf4282bb57e2befd34d463e Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.958342 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.964430 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.975024 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-c
rc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 09:54:08 crc kubenswrapper[4746]: W1211 09:54:08.976010 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-260c4ab63af2bc723ace27d3462e47a5cae2b541d3c990d6530e5f55bfb92c50 WatchSource:0}: Error finding container 260c4ab63af2bc723ace27d3462e47a5cae2b541d3c990d6530e5f55bfb92c50: Status 404 returned error can't find the container with id 260c4ab63af2bc723ace27d3462e47a5cae2b541d3c990d6530e5f55bfb92c50 Dec 11 09:54:08 crc kubenswrapper[4746]: I1211 09:54:08.991894 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.003029 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.019824 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.139031 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:54:09 crc kubenswrapper[4746]: E1211 09:54:09.139186 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:54:10.139166415 +0000 UTC m=+22.999029738 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.239773 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.239824 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.239847 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.239869 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:09 crc kubenswrapper[4746]: E1211 09:54:09.239926 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 09:54:09 crc kubenswrapper[4746]: E1211 09:54:09.239947 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 09:54:09 crc kubenswrapper[4746]: E1211 09:54:09.239962 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 09:54:09 crc kubenswrapper[4746]: E1211 09:54:09.239972 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered 
Dec 11 09:54:09 crc kubenswrapper[4746]: E1211 09:54:09.239974 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:09 crc kubenswrapper[4746]: E1211 09:54:09.239988 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:10.239967667 +0000 UTC m=+23.099830980 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 09:54:09 crc kubenswrapper[4746]: E1211 09:54:09.239935 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 09:54:09 crc kubenswrapper[4746]: E1211 09:54:09.240008 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:10.239997897 +0000 UTC m=+23.099861210 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 09:54:09 crc kubenswrapper[4746]: E1211 09:54:09.240020 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 09:54:09 crc kubenswrapper[4746]: E1211 09:54:09.240023 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:10.240015238 +0000 UTC m=+23.099878551 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:09 crc kubenswrapper[4746]: E1211 09:54:09.240030 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:09 crc kubenswrapper[4746]: E1211 09:54:09.240078 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 
nodeName:}" failed. No retries permitted until 2025-12-11 09:54:10.2400696 +0000 UTC m=+23.099932913 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.634324 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.634907 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.636288 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.636941 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.637983 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.638545 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.639259 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.640318 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.640983 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.642060 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.642669 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.643877 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.644438 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.645033 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.646146 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.646725 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.647863 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.648308 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.648952 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.650872 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.651409 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.652759 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.653460 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.654640 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.655141 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.655822 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.657307 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.657879 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.658988 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.660691 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" 
path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.662775 4746 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.662898 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.664746 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.665893 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.666953 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.668815 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.669559 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.671305 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.671953 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.673209 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.673756 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.674823 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.675560 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.676655 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.677197 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.678290 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.678873 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.680182 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.682140 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.682825 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.683784 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.684359 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.684898 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.686781 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.741447 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5"} Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.741506 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e31d6fb2321dcb7fececb1892aaee581e5679dbc5cf4282bb57e2befd34d463e"} Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.744217 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.746168 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c"} Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.746535 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.748024 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870"} Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.748086 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454"} Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.748102 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a06d662a938ee1f47fef10dfb9f393ab703e03a7c09b2ed39b90f0b1c6178653"} Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.749954 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"260c4ab63af2bc723ace27d3462e47a5cae2b541d3c990d6530e5f55bfb92c50"} Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.772585 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] 
\\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:09Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.809336 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:09Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.848215 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:09Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:09 crc kubenswrapper[4746]: I1211 09:54:09.942585 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:09Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.043617 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.057584 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.075268 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.098906 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.119823 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.130458 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.142844 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.153778 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.157463 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:54:10 crc kubenswrapper[4746]: E1211 09:54:10.157660 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:54:12.157629491 +0000 UTC m=+25.017492804 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.180344 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.232976 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.244347 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.258030 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.258090 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.258115 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.258139 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:10 crc kubenswrapper[4746]: E1211 09:54:10.258265 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 09:54:10 crc kubenswrapper[4746]: E1211 09:54:10.258282 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 09:54:10 crc kubenswrapper[4746]: E1211 09:54:10.258292 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:10 crc kubenswrapper[4746]: E1211 09:54:10.258347 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:12.258327849 +0000 UTC m=+25.118191172 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:10 crc kubenswrapper[4746]: E1211 09:54:10.258402 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 09:54:10 crc kubenswrapper[4746]: E1211 09:54:10.258427 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:12.258418692 +0000 UTC m=+25.118282005 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 09:54:10 crc kubenswrapper[4746]: E1211 09:54:10.258487 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 09:54:10 crc kubenswrapper[4746]: E1211 09:54:10.258513 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:12.258504534 +0000 UTC m=+25.118367857 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 09:54:10 crc kubenswrapper[4746]: E1211 09:54:10.258564 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 09:54:10 crc kubenswrapper[4746]: E1211 09:54:10.258577 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 09:54:10 crc kubenswrapper[4746]: E1211 09:54:10.258586 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: 
[object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:10 crc kubenswrapper[4746]: E1211 09:54:10.258610 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:12.258602087 +0000 UTC m=+25.118465400 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.339827 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.353994 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-6jfmh"] Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.354336 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-6jfmh" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.357188 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.358328 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.358833 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.386677 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.406888 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.416615 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.429142 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.441403 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.454904 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.459195 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8cr2\" (UniqueName: \"kubernetes.io/projected/58f9b204-f3da-4add-be36-d1be33351d97-kube-api-access-s8cr2\") pod \"node-resolver-6jfmh\" (UID: \"58f9b204-f3da-4add-be36-d1be33351d97\") " pod="openshift-dns/node-resolver-6jfmh" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.459252 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/58f9b204-f3da-4add-be36-d1be33351d97-hosts-file\") pod \"node-resolver-6jfmh\" (UID: \"58f9b204-f3da-4add-be36-d1be33351d97\") " pod="openshift-dns/node-resolver-6jfmh" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.502609 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.519458 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.528635 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.559961 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/58f9b204-f3da-4add-be36-d1be33351d97-hosts-file\") pod \"node-resolver-6jfmh\" (UID: \"58f9b204-f3da-4add-be36-d1be33351d97\") " pod="openshift-dns/node-resolver-6jfmh" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.560029 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8cr2\" (UniqueName: 
\"kubernetes.io/projected/58f9b204-f3da-4add-be36-d1be33351d97-kube-api-access-s8cr2\") pod \"node-resolver-6jfmh\" (UID: \"58f9b204-f3da-4add-be36-d1be33351d97\") " pod="openshift-dns/node-resolver-6jfmh" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.560409 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/58f9b204-f3da-4add-be36-d1be33351d97-hosts-file\") pod \"node-resolver-6jfmh\" (UID: \"58f9b204-f3da-4add-be36-d1be33351d97\") " pod="openshift-dns/node-resolver-6jfmh" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.629898 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:10 crc kubenswrapper[4746]: E1211 09:54:10.630036 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.630404 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.630433 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:10 crc kubenswrapper[4746]: E1211 09:54:10.630479 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:10 crc kubenswrapper[4746]: E1211 09:54:10.630523 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.739661 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8cr2\" (UniqueName: \"kubernetes.io/projected/58f9b204-f3da-4add-be36-d1be33351d97-kube-api-access-s8cr2\") pod \"node-resolver-6jfmh\" (UID: \"58f9b204-f3da-4add-be36-d1be33351d97\") " pod="openshift-dns/node-resolver-6jfmh" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.777847 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mxwk6"] Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.778214 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.790473 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.790534 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.790722 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.790912 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.791101 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.795458 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-vtfvl"] Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.796029 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.796412 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w2s5z"] Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.797361 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.798429 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.798558 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.798627 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-r622c"] Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.798966 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.799575 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.803257 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.803580 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.804227 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.805879 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.806034 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.807645 4746 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.807691 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.807910 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.810076 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.822845 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.833447 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.833778 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.863156 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.863212 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-run-k8s-cni-cncf-io\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.863240 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phjqt\" (UniqueName: \"kubernetes.io/projected/6c9cd07c-9f4b-41bb-b29b-db9411c64336-kube-api-access-phjqt\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.863290 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/70a89e1d-ff2b-4918-bae1-2f79d18396e8-proxy-tls\") pod \"machine-config-daemon-mxwk6\" (UID: \"70a89e1d-ff2b-4918-bae1-2f79d18396e8\") " pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.863371 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/52ba00d9-b0ef-4496-a6b8-e170f405c592-cni-binary-copy\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.863437 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6c9cd07c-9f4b-41bb-b29b-db9411c64336-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.863470 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-var-lib-openvswitch\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.863493 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-node-log\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.863518 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-os-release\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.863541 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/52ba00d9-b0ef-4496-a6b8-e170f405c592-multus-daemon-config\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.863570 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/70a89e1d-ff2b-4918-bae1-2f79d18396e8-mcd-auth-proxy-config\") pod \"machine-config-daemon-mxwk6\" (UID: \"70a89e1d-ff2b-4918-bae1-2f79d18396e8\") " pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.863595 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-multus-socket-dir-parent\") pod \"multus-r622c\" (UID: 
\"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.863637 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-systemd-units\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.863659 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-ovn\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871576 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spfrq\" (UniqueName: \"kubernetes.io/projected/52ba00d9-b0ef-4496-a6b8-e170f405c592-kube-api-access-spfrq\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871627 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6c9cd07c-9f4b-41bb-b29b-db9411c64336-cnibin\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871654 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-cni-netd\") pod \"ovnkube-node-w2s5z\" (UID: 
\"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871674 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-cnibin\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871698 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-log-socket\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871717 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/014636cb-e768-4554-9556-460db2ebfdcb-ovn-node-metrics-cert\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871759 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-var-lib-cni-bin\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871785 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6c9cd07c-9f4b-41bb-b29b-db9411c64336-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: 
\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871804 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-run-netns\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871824 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-system-cni-dir\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871844 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-multus-cni-dir\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871865 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-multus-conf-dir\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871900 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-run-ovn-kubernetes\") pod \"ovnkube-node-w2s5z\" (UID: 
\"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871921 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-slash\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871939 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-systemd\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871964 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-kubelet\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.871982 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/70a89e1d-ff2b-4918-bae1-2f79d18396e8-rootfs\") pod \"machine-config-daemon-mxwk6\" (UID: \"70a89e1d-ff2b-4918-bae1-2f79d18396e8\") " pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872001 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzchj\" (UniqueName: \"kubernetes.io/projected/70a89e1d-ff2b-4918-bae1-2f79d18396e8-kube-api-access-zzchj\") pod 
\"machine-config-daemon-mxwk6\" (UID: \"70a89e1d-ff2b-4918-bae1-2f79d18396e8\") " pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872022 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-hostroot\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872072 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-openvswitch\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872123 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-ovnkube-script-lib\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872146 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49h5j\" (UniqueName: \"kubernetes.io/projected/014636cb-e768-4554-9556-460db2ebfdcb-kube-api-access-49h5j\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872169 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-run-multus-certs\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872190 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6c9cd07c-9f4b-41bb-b29b-db9411c64336-system-cni-dir\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872209 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6c9cd07c-9f4b-41bb-b29b-db9411c64336-cni-binary-copy\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872230 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6c9cd07c-9f4b-41bb-b29b-db9411c64336-os-release\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872256 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-etc-openvswitch\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872274 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-cni-bin\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872292 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-var-lib-cni-multus\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872313 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-var-lib-kubelet\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872332 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-etc-kubernetes\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872353 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-ovnkube-config\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872370 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-env-overrides\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.872387 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-run-netns\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.880303 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.954407 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.973658 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-var-lib-openvswitch\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.973931 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-node-log\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.974024 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-os-release\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.974140 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/52ba00d9-b0ef-4496-a6b8-e170f405c592-multus-daemon-config\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.974239 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6c9cd07c-9f4b-41bb-b29b-db9411c64336-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.974328 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-multus-socket-dir-parent\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.974417 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/70a89e1d-ff2b-4918-bae1-2f79d18396e8-mcd-auth-proxy-config\") pod \"machine-config-daemon-mxwk6\" (UID: \"70a89e1d-ff2b-4918-bae1-2f79d18396e8\") " pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.974502 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-ovn\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.974589 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spfrq\" (UniqueName: \"kubernetes.io/projected/52ba00d9-b0ef-4496-a6b8-e170f405c592-kube-api-access-spfrq\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.974689 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-systemd-units\") pod \"ovnkube-node-w2s5z\" (UID: 
\"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.974775 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6c9cd07c-9f4b-41bb-b29b-db9411c64336-cnibin\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.974864 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-cni-netd\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.974947 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-cnibin\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.975114 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-log-socket\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.975959 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/014636cb-e768-4554-9556-460db2ebfdcb-ovn-node-metrics-cert\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 
09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.976066 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-var-lib-cni-bin\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.976153 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6c9cd07c-9f4b-41bb-b29b-db9411c64336-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.976240 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-run-netns\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.976325 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-system-cni-dir\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.976421 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-multus-cni-dir\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.976508 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-multus-conf-dir\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.976602 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-run-ovn-kubernetes\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.976701 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-slash\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.976785 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-systemd\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.976873 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-kubelet\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.976958 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/70a89e1d-ff2b-4918-bae1-2f79d18396e8-rootfs\") pod \"machine-config-daemon-mxwk6\" (UID: \"70a89e1d-ff2b-4918-bae1-2f79d18396e8\") " pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977061 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzchj\" (UniqueName: \"kubernetes.io/projected/70a89e1d-ff2b-4918-bae1-2f79d18396e8-kube-api-access-zzchj\") pod \"machine-config-daemon-mxwk6\" (UID: \"70a89e1d-ff2b-4918-bae1-2f79d18396e8\") " pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977401 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-hostroot\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977495 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49h5j\" (UniqueName: \"kubernetes.io/projected/014636cb-e768-4554-9556-460db2ebfdcb-kube-api-access-49h5j\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977583 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-systemd-units\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.973980 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-node-log\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977554 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-cnibin\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977583 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-run-multus-certs\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977656 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-openvswitch\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977675 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-ovnkube-script-lib\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977695 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6c9cd07c-9f4b-41bb-b29b-db9411c64336-system-cni-dir\") pod \"multus-additional-cni-plugins-vtfvl\" 
(UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977714 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6c9cd07c-9f4b-41bb-b29b-db9411c64336-cni-binary-copy\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977733 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-var-lib-cni-multus\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977751 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-var-lib-kubelet\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977767 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-etc-kubernetes\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977784 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6c9cd07c-9f4b-41bb-b29b-db9411c64336-os-release\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " 
pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977800 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-etc-openvswitch\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977816 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-cni-bin\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977832 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-env-overrides\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977848 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-run-netns\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977866 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-ovnkube-config\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: 
I1211 09:54:10.977884 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977901 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-run-k8s-cni-cncf-io\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977921 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phjqt\" (UniqueName: \"kubernetes.io/projected/6c9cd07c-9f4b-41bb-b29b-db9411c64336-kube-api-access-phjqt\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977938 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/70a89e1d-ff2b-4918-bae1-2f79d18396e8-proxy-tls\") pod \"machine-config-daemon-mxwk6\" (UID: \"70a89e1d-ff2b-4918-bae1-2f79d18396e8\") " pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.977965 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/52ba00d9-b0ef-4496-a6b8-e170f405c592-cni-binary-copy\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc 
kubenswrapper[4746]: I1211 09:54:10.978433 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-run-multus-certs\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.978543 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-log-socket\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:10 crc kubenswrapper[4746]: I1211 09:54:10.982258 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-ovn\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.005440 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-var-lib-cni-bin\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006444 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6c9cd07c-9f4b-41bb-b29b-db9411c64336-cnibin\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006479 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-run-netns\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006446 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-kubelet\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006489 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-hostroot\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006514 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-etc-openvswitch\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006530 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6c9cd07c-9f4b-41bb-b29b-db9411c64336-os-release\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006577 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-multus-conf-dir\") pod \"multus-r622c\" (UID: 
\"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006581 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/70a89e1d-ff2b-4918-bae1-2f79d18396e8-rootfs\") pod \"machine-config-daemon-mxwk6\" (UID: \"70a89e1d-ff2b-4918-bae1-2f79d18396e8\") " pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006552 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-multus-cni-dir\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006608 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-run-ovn-kubernetes\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006546 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-cni-bin\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006622 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-cni-netd\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc 
kubenswrapper[4746]: I1211 09:54:11.006549 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-system-cni-dir\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006637 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-slash\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006661 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-openvswitch\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006672 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-systemd\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006699 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006723 4746 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-run-netns\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:10.977203 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-multus-socket-dir-parent\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.006982 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-var-lib-cni-multus\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.007066 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6c9cd07c-9f4b-41bb-b29b-db9411c64336-system-cni-dir\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:10.977255 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-os-release\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.007231 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-var-lib-kubelet\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.007269 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-etc-kubernetes\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:10.973948 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-var-lib-openvswitch\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.007539 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/52ba00d9-b0ef-4496-a6b8-e170f405c592-host-run-k8s-cni-cncf-io\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.015262 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-6jfmh" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.064838 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6c9cd07c-9f4b-41bb-b29b-db9411c64336-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.069075 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/70a89e1d-ff2b-4918-bae1-2f79d18396e8-proxy-tls\") pod \"machine-config-daemon-mxwk6\" (UID: \"70a89e1d-ff2b-4918-bae1-2f79d18396e8\") " pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.080551 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-ovnkube-script-lib\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.082224 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-env-overrides\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.094282 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/70a89e1d-ff2b-4918-bae1-2f79d18396e8-mcd-auth-proxy-config\") pod \"machine-config-daemon-mxwk6\" (UID: \"70a89e1d-ff2b-4918-bae1-2f79d18396e8\") " 
pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.094606 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.113214 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.129212 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.140876 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-ovnkube-config\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.141249 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/52ba00d9-b0ef-4496-a6b8-e170f405c592-cni-binary-copy\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.143087 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6c9cd07c-9f4b-41bb-b29b-db9411c64336-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: 
\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.143148 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phjqt\" (UniqueName: \"kubernetes.io/projected/6c9cd07c-9f4b-41bb-b29b-db9411c64336-kube-api-access-phjqt\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.143390 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/52ba00d9-b0ef-4496-a6b8-e170f405c592-multus-daemon-config\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.144145 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.148202 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/014636cb-e768-4554-9556-460db2ebfdcb-ovn-node-metrics-cert\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: 
I1211 09:54:11.162680 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6c9cd07c-9f4b-41bb-b29b-db9411c64336-cni-binary-copy\") pod \"multus-additional-cni-plugins-vtfvl\" (UID: \"6c9cd07c-9f4b-41bb-b29b-db9411c64336\") " pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.165258 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49h5j\" (UniqueName: \"kubernetes.io/projected/014636cb-e768-4554-9556-460db2ebfdcb-kube-api-access-49h5j\") pod \"ovnkube-node-w2s5z\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.165697 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spfrq\" (UniqueName: \"kubernetes.io/projected/52ba00d9-b0ef-4496-a6b8-e170f405c592-kube-api-access-spfrq\") pod \"multus-r622c\" (UID: \"52ba00d9-b0ef-4496-a6b8-e170f405c592\") " pod="openshift-multus/multus-r622c" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.173108 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzchj\" (UniqueName: \"kubernetes.io/projected/70a89e1d-ff2b-4918-bae1-2f79d18396e8-kube-api-access-zzchj\") pod \"machine-config-daemon-mxwk6\" (UID: \"70a89e1d-ff2b-4918-bae1-2f79d18396e8\") " pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.215421 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.235675 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.252895 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.286323 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.313747 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.328499 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.341789 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.351695 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.371986 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.384202 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.388291 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 09:54:11 crc kubenswrapper[4746]: W1211 09:54:11.397938 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70a89e1d_ff2b_4918_bae1_2f79d18396e8.slice/crio-042b50185ca5f49366a64d25fe6191ae238cbca8d6cd55849f74b91e83654245 WatchSource:0}: Error finding container 042b50185ca5f49366a64d25fe6191ae238cbca8d6cd55849f74b91e83654245: Status 404 returned error can't find the container with id 042b50185ca5f49366a64d25fe6191ae238cbca8d6cd55849f74b91e83654245 Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.400231 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.408705 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.416827 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.428607 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.442730 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: W1211 09:54:11.447842 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod014636cb_e768_4554_9556_460db2ebfdcb.slice/crio-4ba82b00cbeb1aa1aa89f2f235878a2cd84f3232a6850548f1fbaab331adc5c0 WatchSource:0}: Error finding container 4ba82b00cbeb1aa1aa89f2f235878a2cd84f3232a6850548f1fbaab331adc5c0: Status 404 returned error can't find the container with id 4ba82b00cbeb1aa1aa89f2f235878a2cd84f3232a6850548f1fbaab331adc5c0 Dec 11 
09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.449729 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-r622c" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.462425 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: W1211 09:54:11.472612 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52ba00d9_b0ef_4496_a6b8_e170f405c592.slice/crio-28c593713783117156ce3f698e88efc371260084668537dd09b54d743926e04b WatchSource:0}: Error finding container 28c593713783117156ce3f698e88efc371260084668537dd09b54d743926e04b: Status 404 returned error can't find the container with id 28c593713783117156ce3f698e88efc371260084668537dd09b54d743926e04b Dec 11 09:54:11 crc kubenswrapper[4746]: W1211 09:54:11.473694 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c9cd07c_9f4b_41bb_b29b_db9411c64336.slice/crio-de671d31dc201e776edcc4f29f62b1a43c67e5a9c5168079c8837008209dc9d2 WatchSource:0}: Error finding container 
de671d31dc201e776edcc4f29f62b1a43c67e5a9c5168079c8837008209dc9d2: Status 404 returned error can't find the container with id de671d31dc201e776edcc4f29f62b1a43c67e5a9c5168079c8837008209dc9d2 Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.479375 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.509157 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.766081 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3"} Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.770698 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"09e1c4e58441d318207148b1a44f3bf3ccebb1d74dcf428d6fb588a3b46a2cc5"} Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.770739 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"042b50185ca5f49366a64d25fe6191ae238cbca8d6cd55849f74b91e83654245"} Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.772845 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6jfmh" event={"ID":"58f9b204-f3da-4add-be36-d1be33351d97","Type":"ContainerStarted","Data":"2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db"} Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.772888 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-6jfmh" event={"ID":"58f9b204-f3da-4add-be36-d1be33351d97","Type":"ContainerStarted","Data":"42563bb8e66269deff2e477dfc62dc030f1fc7e015b6e5d94a3a55bfeaa09866"} Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.773584 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" 
event={"ID":"6c9cd07c-9f4b-41bb-b29b-db9411c64336","Type":"ContainerStarted","Data":"de671d31dc201e776edcc4f29f62b1a43c67e5a9c5168079c8837008209dc9d2"} Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.776841 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r622c" event={"ID":"52ba00d9-b0ef-4496-a6b8-e170f405c592","Type":"ContainerStarted","Data":"0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade"} Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.776932 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r622c" event={"ID":"52ba00d9-b0ef-4496-a6b8-e170f405c592","Type":"ContainerStarted","Data":"28c593713783117156ce3f698e88efc371260084668537dd09b54d743926e04b"} Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.778119 4746 generic.go:334] "Generic (PLEG): container finished" podID="014636cb-e768-4554-9556-460db2ebfdcb" containerID="ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0" exitCode=0 Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.778158 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerDied","Data":"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0"} Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.778183 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerStarted","Data":"4ba82b00cbeb1aa1aa89f2f235878a2cd84f3232a6850548f1fbaab331adc5c0"} Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.783306 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.798818 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.816444 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.832702 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.841778 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.857489 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.868385 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.878741 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.893567 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.904275 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.916145 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.927458 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.937630 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.950301 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.960583 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:11 crc kubenswrapper[4746]: I1211 09:54:11.985557 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:11Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.004856 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.029271 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b49
1cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.055788 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.071101 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.084495 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.097772 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.114364 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.125307 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.141645 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.153900 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.200223 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:54:12 crc kubenswrapper[4746]: E1211 09:54:12.200416 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:54:16.200389815 +0000 UTC m=+29.060253118 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.301379 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.301437 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.301463 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.301489 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:12 crc kubenswrapper[4746]: E1211 09:54:12.301613 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 09:54:12 crc kubenswrapper[4746]: E1211 09:54:12.301629 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 09:54:12 crc kubenswrapper[4746]: E1211 09:54:12.301639 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:12 crc kubenswrapper[4746]: E1211 09:54:12.301685 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:16.30166976 +0000 UTC m=+29.161533083 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:12 crc kubenswrapper[4746]: E1211 09:54:12.302007 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 09:54:12 crc kubenswrapper[4746]: E1211 09:54:12.302061 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:16.302037281 +0000 UTC m=+29.161900604 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 09:54:12 crc kubenswrapper[4746]: E1211 09:54:12.302099 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 09:54:12 crc kubenswrapper[4746]: E1211 09:54:12.302123 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-11 09:54:16.302116423 +0000 UTC m=+29.161979736 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 09:54:12 crc kubenswrapper[4746]: E1211 09:54:12.302175 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 09:54:12 crc kubenswrapper[4746]: E1211 09:54:12.302185 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 09:54:12 crc kubenswrapper[4746]: E1211 09:54:12.302194 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:12 crc kubenswrapper[4746]: E1211 09:54:12.302217 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:16.302209736 +0000 UTC m=+29.162073049 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.629884 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.629915 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.629935 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:12 crc kubenswrapper[4746]: E1211 09:54:12.630351 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:12 crc kubenswrapper[4746]: E1211 09:54:12.630466 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:12 crc kubenswrapper[4746]: E1211 09:54:12.630550 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.784838 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerStarted","Data":"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503"} Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.784916 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerStarted","Data":"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2"} Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.784928 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerStarted","Data":"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578"} Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.784939 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerStarted","Data":"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2"} Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.784947 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerStarted","Data":"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda"} Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.786609 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e"} Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.788209 4746 generic.go:334] "Generic (PLEG): container finished" podID="6c9cd07c-9f4b-41bb-b29b-db9411c64336" containerID="1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7" exitCode=0 Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.788255 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" event={"ID":"6c9cd07c-9f4b-41bb-b29b-db9411c64336","Type":"ContainerDied","Data":"1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7"} Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.812916 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.830525 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.842848 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.863361 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.878958 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.892191 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.909565 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.922641 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.937381 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.951404 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.969172 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:12 crc kubenswrapper[4746]: I1211 09:54:12.981666 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.000212 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.013420 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.026783 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.042973 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.055027 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.067074 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.082278 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.096339 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.107327 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.214918 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.231239 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.246826 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b49
1cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.263868 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.278716 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.590468 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-kbcxr"] Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.591062 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kbcxr" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.592878 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.593100 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.593348 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.594259 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.602865 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.615026 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.626982 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.639938 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.648503 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.666293 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.676526 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.686874 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.701892 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.715959 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.718363 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e883e78-75ce-40ce-90f5-41d1e355ef02-host\") pod \"node-ca-kbcxr\" (UID: \"5e883e78-75ce-40ce-90f5-41d1e355ef02\") " pod="openshift-image-registry/node-ca-kbcxr" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.718400 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsdjk\" 
(UniqueName: \"kubernetes.io/projected/5e883e78-75ce-40ce-90f5-41d1e355ef02-kube-api-access-vsdjk\") pod \"node-ca-kbcxr\" (UID: \"5e883e78-75ce-40ce-90f5-41d1e355ef02\") " pod="openshift-image-registry/node-ca-kbcxr" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.718426 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5e883e78-75ce-40ce-90f5-41d1e355ef02-serviceca\") pod \"node-ca-kbcxr\" (UID: \"5e883e78-75ce-40ce-90f5-41d1e355ef02\") " pod="openshift-image-registry/node-ca-kbcxr" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.729596 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.747748 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.759342 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.770382 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.794616 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" event={"ID":"6c9cd07c-9f4b-41bb-b29b-db9411c64336","Type":"ContainerStarted","Data":"de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b"} Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.798840 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" 
event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerStarted","Data":"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045"} Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.818828 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.832917 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.840984 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsdjk\" (UniqueName: \"kubernetes.io/projected/5e883e78-75ce-40ce-90f5-41d1e355ef02-kube-api-access-vsdjk\") pod \"node-ca-kbcxr\" (UID: \"5e883e78-75ce-40ce-90f5-41d1e355ef02\") " pod="openshift-image-registry/node-ca-kbcxr" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.841070 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e883e78-75ce-40ce-90f5-41d1e355ef02-host\") pod \"node-ca-kbcxr\" (UID: \"5e883e78-75ce-40ce-90f5-41d1e355ef02\") " pod="openshift-image-registry/node-ca-kbcxr" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.841100 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5e883e78-75ce-40ce-90f5-41d1e355ef02-serviceca\") pod \"node-ca-kbcxr\" (UID: \"5e883e78-75ce-40ce-90f5-41d1e355ef02\") " pod="openshift-image-registry/node-ca-kbcxr" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.842427 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5e883e78-75ce-40ce-90f5-41d1e355ef02-serviceca\") pod \"node-ca-kbcxr\" (UID: \"5e883e78-75ce-40ce-90f5-41d1e355ef02\") " pod="openshift-image-registry/node-ca-kbcxr" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.842552 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5e883e78-75ce-40ce-90f5-41d1e355ef02-host\") pod \"node-ca-kbcxr\" (UID: \"5e883e78-75ce-40ce-90f5-41d1e355ef02\") " pod="openshift-image-registry/node-ca-kbcxr" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.852759 4746 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.861903 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsdjk\" (UniqueName: \"kubernetes.io/projected/5e883e78-75ce-40ce-90f5-41d1e355ef02-kube-api-access-vsdjk\") pod \"node-ca-kbcxr\" (UID: \"5e883e78-75ce-40ce-90f5-41d1e355ef02\") " pod="openshift-image-registry/node-ca-kbcxr" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.868401 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.894287 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.913003 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.925312 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.936223 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.949380 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.964229 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.976760 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:13 crc kubenswrapper[4746]: I1211 09:54:13.988332 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:13.999951 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:13Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.009472 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.080175 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kbcxr" Dec 11 09:54:14 crc kubenswrapper[4746]: W1211 09:54:14.116645 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e883e78_75ce_40ce_90f5_41d1e355ef02.slice/crio-f40f574197950e79ec94d70018a861947e09d92d3864b929dccacc1bf81ae2d0 WatchSource:0}: Error finding container f40f574197950e79ec94d70018a861947e09d92d3864b929dccacc1bf81ae2d0: Status 404 returned error can't find the container with id f40f574197950e79ec94d70018a861947e09d92d3864b929dccacc1bf81ae2d0 Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.277398 4746 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.281233 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.281269 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.281281 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.282459 4746 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.288730 4746 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.289002 4746 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.290988 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.291012 4746 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.291023 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.291551 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.291566 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:14Z","lastTransitionTime":"2025-12-11T09:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:14 crc kubenswrapper[4746]: E1211 09:54:14.326223 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.329867 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.329904 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.329914 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.329930 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.329940 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:14Z","lastTransitionTime":"2025-12-11T09:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:14 crc kubenswrapper[4746]: E1211 09:54:14.346480 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.349486 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.349519 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.349531 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.349547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.349558 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:14Z","lastTransitionTime":"2025-12-11T09:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:14 crc kubenswrapper[4746]: E1211 09:54:14.363694 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.367204 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.367245 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.367254 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.367269 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.367278 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:14Z","lastTransitionTime":"2025-12-11T09:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:14 crc kubenswrapper[4746]: E1211 09:54:14.380519 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.383697 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.383735 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.383748 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.383764 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.383777 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:14Z","lastTransitionTime":"2025-12-11T09:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:14 crc kubenswrapper[4746]: E1211 09:54:14.397950 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: E1211 09:54:14.398124 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.399563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.399591 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.399603 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.399620 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.399632 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:14Z","lastTransitionTime":"2025-12-11T09:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.502321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.502369 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.502381 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.502399 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.502411 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:14Z","lastTransitionTime":"2025-12-11T09:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.605065 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.605111 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.605120 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.605134 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.605144 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:14Z","lastTransitionTime":"2025-12-11T09:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.629453 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.629489 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.629541 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:14 crc kubenswrapper[4746]: E1211 09:54:14.629644 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:14 crc kubenswrapper[4746]: E1211 09:54:14.629737 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:14 crc kubenswrapper[4746]: E1211 09:54:14.629849 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.707500 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.707546 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.707558 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.707575 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.707588 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:14Z","lastTransitionTime":"2025-12-11T09:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.801804 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kbcxr" event={"ID":"5e883e78-75ce-40ce-90f5-41d1e355ef02","Type":"ContainerStarted","Data":"57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051"} Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.801871 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kbcxr" event={"ID":"5e883e78-75ce-40ce-90f5-41d1e355ef02","Type":"ContainerStarted","Data":"f40f574197950e79ec94d70018a861947e09d92d3864b929dccacc1bf81ae2d0"} Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.803356 4746 generic.go:334] "Generic (PLEG): container finished" podID="6c9cd07c-9f4b-41bb-b29b-db9411c64336" containerID="de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b" exitCode=0 Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.803384 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" event={"ID":"6c9cd07c-9f4b-41bb-b29b-db9411c64336","Type":"ContainerDied","Data":"de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b"} Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.809597 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.809631 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.809641 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.809654 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.809666 4746 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:14Z","lastTransitionTime":"2025-12-11T09:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.817319 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.837635 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.855069 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.865682 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.883726 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.898563 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.908706 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.912621 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.912656 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.912667 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 
09:54:14.912684 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.912697 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:14Z","lastTransitionTime":"2025-12-11T09:54:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.923187 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.936195 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.949865 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.963822 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.978328 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:14 crc kubenswrapper[4746]: I1211 09:54:14.993982 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:14Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.004077 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.014991 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.015391 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.015420 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.015430 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.015445 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.015455 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:15Z","lastTransitionTime":"2025-12-11T09:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.034263 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.048473 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.060152 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.072736 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.085769 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.099011 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.112635 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.117500 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.117637 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.117709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.117769 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.117823 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:15Z","lastTransitionTime":"2025-12-11T09:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.146509 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.165195 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run
/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.176300 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.187660 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.199445 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.215461 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.220407 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.220622 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.220726 4746 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.220830 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.220918 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:15Z","lastTransitionTime":"2025-12-11T09:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.327355 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.327394 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.327402 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.327417 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.327429 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:15Z","lastTransitionTime":"2025-12-11T09:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.429801 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.429854 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.429871 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.429895 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.429914 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:15Z","lastTransitionTime":"2025-12-11T09:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.532758 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.532792 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.532802 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.532818 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.532829 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:15Z","lastTransitionTime":"2025-12-11T09:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.634853 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.634893 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.634905 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.634921 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.634932 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:15Z","lastTransitionTime":"2025-12-11T09:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.737451 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.737519 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.737537 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.737563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.737582 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:15Z","lastTransitionTime":"2025-12-11T09:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.807965 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerStarted","Data":"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4"} Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.809163 4746 generic.go:334] "Generic (PLEG): container finished" podID="6c9cd07c-9f4b-41bb-b29b-db9411c64336" containerID="8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86" exitCode=0 Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.809184 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" event={"ID":"6c9cd07c-9f4b-41bb-b29b-db9411c64336","Type":"ContainerDied","Data":"8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86"} Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.833819 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.839263 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.839367 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.839463 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.839547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.839604 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:15Z","lastTransitionTime":"2025-12-11T09:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.846876 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 
09:54:15.858128 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.891417 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.913704 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.924606 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.939299 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.941620 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.941645 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.941654 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.941667 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.941692 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:15Z","lastTransitionTime":"2025-12-11T09:54:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.949886 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.962085 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.971991 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.983951 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:15 crc kubenswrapper[4746]: I1211 09:54:15.993785 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:15Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.002904 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.011293 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.051277 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.051558 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.051567 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.051582 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.051592 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:16Z","lastTransitionTime":"2025-12-11T09:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.159846 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.159877 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.159885 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.159897 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.159906 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:16Z","lastTransitionTime":"2025-12-11T09:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.262306 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.262358 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.262370 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.262389 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.262402 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:16Z","lastTransitionTime":"2025-12-11T09:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.289252 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:54:16 crc kubenswrapper[4746]: E1211 09:54:16.289594 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 09:54:24.28954705 +0000 UTC m=+37.149410403 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.364147 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.364185 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.364195 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.364208 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.364218 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:16Z","lastTransitionTime":"2025-12-11T09:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.390417 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.390480 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.390515 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.390546 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:16 crc kubenswrapper[4746]: E1211 09:54:16.390589 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 11 09:54:16 crc kubenswrapper[4746]: E1211 09:54:16.390628 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 09:54:16 crc kubenswrapper[4746]: E1211 09:54:16.390636 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 09:54:16 crc kubenswrapper[4746]: E1211 09:54:16.390650 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 09:54:16 crc kubenswrapper[4746]: E1211 09:54:16.390658 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 09:54:16 crc kubenswrapper[4746]: E1211 09:54:16.390660 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:16 crc kubenswrapper[4746]: E1211 09:54:16.390661 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 09:54:16 crc kubenswrapper[4746]: E1211 09:54:16.390668 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:16 crc kubenswrapper[4746]: E1211 09:54:16.390711 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:24.390684292 +0000 UTC m=+37.250547645 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 09:54:16 crc kubenswrapper[4746]: E1211 09:54:16.390735 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:24.390723463 +0000 UTC m=+37.250586786 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:16 crc kubenswrapper[4746]: E1211 09:54:16.390750 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:24.390743573 +0000 UTC m=+37.250606896 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 09:54:16 crc kubenswrapper[4746]: E1211 09:54:16.390767 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:24.390757974 +0000 UTC m=+37.250621297 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.467188 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.467239 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.467247 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.467263 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.467272 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:16Z","lastTransitionTime":"2025-12-11T09:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.569199 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.569247 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.569258 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.569277 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.569288 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:16Z","lastTransitionTime":"2025-12-11T09:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.630222 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:16 crc kubenswrapper[4746]: E1211 09:54:16.630469 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.630765 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.630857 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:16 crc kubenswrapper[4746]: E1211 09:54:16.631184 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:16 crc kubenswrapper[4746]: E1211 09:54:16.631306 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.672730 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.672799 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.672818 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.672845 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.672863 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:16Z","lastTransitionTime":"2025-12-11T09:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.775094 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.775133 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.775142 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.775155 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.775164 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:16Z","lastTransitionTime":"2025-12-11T09:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.817513 4746 generic.go:334] "Generic (PLEG): container finished" podID="6c9cd07c-9f4b-41bb-b29b-db9411c64336" containerID="46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa" exitCode=0 Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.817587 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" event={"ID":"6c9cd07c-9f4b-41bb-b29b-db9411c64336","Type":"ContainerDied","Data":"46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa"} Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.838956 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.861877 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.876428 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.877612 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.877643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.877654 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.877689 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.877702 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:16Z","lastTransitionTime":"2025-12-11T09:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.895655 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.910628 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.923739 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.941737 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.957811 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.975007 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.980877 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.980915 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.980926 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.980945 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.980956 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:16Z","lastTransitionTime":"2025-12-11T09:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:16 crc kubenswrapper[4746]: I1211 09:54:16.990190 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:16Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.004183 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.020857 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.033938 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.049992 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.083247 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.083287 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.083297 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.083313 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.083324 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:17Z","lastTransitionTime":"2025-12-11T09:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.186186 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.186665 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.186692 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.186721 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.186743 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:17Z","lastTransitionTime":"2025-12-11T09:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.290507 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.290567 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.290586 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.290607 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.290621 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:17Z","lastTransitionTime":"2025-12-11T09:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.392669 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.392702 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.392711 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.392725 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.392734 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:17Z","lastTransitionTime":"2025-12-11T09:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.495193 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.495356 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.495445 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.495526 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.495609 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:17Z","lastTransitionTime":"2025-12-11T09:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.598215 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.598254 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.598265 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.598282 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.598293 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:17Z","lastTransitionTime":"2025-12-11T09:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.646026 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.657392 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.674900 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.688768 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.700025 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.700100 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.700115 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.700136 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.700152 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:17Z","lastTransitionTime":"2025-12-11T09:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.703544 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.721857 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.735438 4746 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.749682 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.761151 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.779962 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.790236 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.800632 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.802020 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.802075 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.802089 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.802116 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.802129 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:17Z","lastTransitionTime":"2025-12-11T09:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.811681 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.827415 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerStarted","Data":"f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d"} Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.827639 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.829473 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.830585 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" event={"ID":"6c9cd07c-9f4b-41bb-b29b-db9411c64336","Type":"ContainerStarted","Data":"7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736"} Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.842807 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.855595 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.870293 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.870863 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.880460 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.893922 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.903574 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.904248 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.904281 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.904293 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.904309 4746 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.904320 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:17Z","lastTransitionTime":"2025-12-11T09:54:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.921022 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.932529 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7
eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\"
:\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.943494 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b08465
2d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\
\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.955641 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.967569 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.978036 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.988759 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:17 crc kubenswrapper[4746]: I1211 09:54:17.998680 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:17Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.006198 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.006260 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.006283 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:18 crc 
kubenswrapper[4746]: I1211 09:54:18.006305 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.006320 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:18Z","lastTransitionTime":"2025-12-11T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.011466 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.028061 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.038908 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.048963 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.061879 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.084868 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.097139 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.107874 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.108396 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.108440 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.108449 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.108464 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.108473 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:18Z","lastTransitionTime":"2025-12-11T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.122903 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phj
qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.134438 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.142703 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.158462 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.168093 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.176591 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.267281 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.267314 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.267340 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.267354 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.267363 
4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:18Z","lastTransitionTime":"2025-12-11T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.374777 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.374820 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.374837 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.374858 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.374875 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:18Z","lastTransitionTime":"2025-12-11T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.554388 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.554431 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.554442 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.554457 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.554468 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:18Z","lastTransitionTime":"2025-12-11T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.722267 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.722297 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.722304 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:18 crc kubenswrapper[4746]: E1211 09:54:18.722375 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:18 crc kubenswrapper[4746]: E1211 09:54:18.722472 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:18 crc kubenswrapper[4746]: E1211 09:54:18.722534 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.726170 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.726202 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.726213 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.726226 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.726249 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:18Z","lastTransitionTime":"2025-12-11T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.828236 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.828273 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.828282 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.828296 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.828304 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:18Z","lastTransitionTime":"2025-12-11T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.832323 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.832983 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.857487 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.871706 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the 
pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.883462 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.895286 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phj
qt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.904702 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520e
d63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.917635 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.928866 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.930120 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.930158 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.930168 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.930185 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.930198 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:18Z","lastTransitionTime":"2025-12-11T09:54:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.946989 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.961775 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.973038 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652
d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.983567 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:18 crc kubenswrapper[4746]: I1211 09:54:18.994915 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:18Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.005150 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:19Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.015146 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:19Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.024244 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:19Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.031582 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.031611 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.031622 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:19 crc 
kubenswrapper[4746]: I1211 09:54:19.031640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.031652 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:19Z","lastTransitionTime":"2025-12-11T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.133728 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.133791 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.133811 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.133838 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.133861 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:19Z","lastTransitionTime":"2025-12-11T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.235777 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.235828 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.235839 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.235855 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.235864 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:19Z","lastTransitionTime":"2025-12-11T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.337854 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.337894 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.337918 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.337935 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.337945 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:19Z","lastTransitionTime":"2025-12-11T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.439960 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.440011 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.440022 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.440059 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.440072 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:19Z","lastTransitionTime":"2025-12-11T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.542202 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.542256 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.542272 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.542294 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.542311 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:19Z","lastTransitionTime":"2025-12-11T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.644551 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.644605 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.644624 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.644643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.644657 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:19Z","lastTransitionTime":"2025-12-11T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.747073 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.747124 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.747137 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.747156 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.747168 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:19Z","lastTransitionTime":"2025-12-11T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.837963 4746 generic.go:334] "Generic (PLEG): container finished" podID="6c9cd07c-9f4b-41bb-b29b-db9411c64336" containerID="7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736" exitCode=0 Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.838075 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" event={"ID":"6c9cd07c-9f4b-41bb-b29b-db9411c64336","Type":"ContainerDied","Data":"7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736"} Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.838185 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.849729 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.849800 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.849823 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.849854 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.849877 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:19Z","lastTransitionTime":"2025-12-11T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.858911 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:19Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.871556 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:19Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.894491 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:19Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.928676 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:19Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.952922 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.952961 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.952971 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.952986 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.952997 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:19Z","lastTransitionTime":"2025-12-11T09:54:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.953780 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:19Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.969816 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:19Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.985930 4746 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:19Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:19 crc kubenswrapper[4746]: I1211 09:54:19.997065 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:19Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.007054 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:20Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.019713 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:20Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.030582 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:20Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.042451 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:20Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.055142 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.055176 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.055185 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.055198 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.055206 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:20Z","lastTransitionTime":"2025-12-11T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.056018 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:20Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.071819 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e2
56744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
2-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:20Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.156719 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.156760 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.156776 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.156791 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.156801 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:20Z","lastTransitionTime":"2025-12-11T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.259700 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.259781 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.259808 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.259842 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.259868 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:20Z","lastTransitionTime":"2025-12-11T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.362579 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.362656 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.362666 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.362681 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.362692 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:20Z","lastTransitionTime":"2025-12-11T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.464876 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.464939 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.464956 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.464980 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.464995 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:20Z","lastTransitionTime":"2025-12-11T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.567822 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.567890 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.567903 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.567939 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.567954 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:20Z","lastTransitionTime":"2025-12-11T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.630290 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.630298 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:20 crc kubenswrapper[4746]: E1211 09:54:20.630484 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.630318 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:20 crc kubenswrapper[4746]: E1211 09:54:20.630577 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:20 crc kubenswrapper[4746]: E1211 09:54:20.630590 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.670999 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.671072 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.671084 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.671100 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.671113 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:20Z","lastTransitionTime":"2025-12-11T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.773567 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.773622 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.773635 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.773655 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.773668 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:20Z","lastTransitionTime":"2025-12-11T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.843755 4746 generic.go:334] "Generic (PLEG): container finished" podID="6c9cd07c-9f4b-41bb-b29b-db9411c64336" containerID="e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb" exitCode=0 Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.843787 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" event={"ID":"6c9cd07c-9f4b-41bb-b29b-db9411c64336","Type":"ContainerDied","Data":"e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb"} Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.843888 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.858314 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:20Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.872806 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:20Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.875952 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.875981 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.875993 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.876008 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.876019 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:20Z","lastTransitionTime":"2025-12-11T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.888273 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:20Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.898360 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:20Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.912715 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:20Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.923718 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:20Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.944002 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:20Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.958263 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:20Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.970923 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652
d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:20Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.979576 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.979607 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.979617 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.979630 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.979639 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:20Z","lastTransitionTime":"2025-12-11T09:54:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:20 crc kubenswrapper[4746]: I1211 09:54:20.985912 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:20Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.080295 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.095806 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.095843 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.095853 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.095868 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.095879 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:21Z","lastTransitionTime":"2025-12-11T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.104840 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.125305 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.139934 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.197703 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.197758 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.197771 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:21 crc 
kubenswrapper[4746]: I1211 09:54:21.197788 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.198125 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:21Z","lastTransitionTime":"2025-12-11T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.301200 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.301246 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.301259 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.301297 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.301308 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:21Z","lastTransitionTime":"2025-12-11T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.403902 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.403974 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.403985 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.404016 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.404035 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:21Z","lastTransitionTime":"2025-12-11T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.506443 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.506492 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.506506 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.506524 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.506536 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:21Z","lastTransitionTime":"2025-12-11T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.608869 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.608906 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.608916 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.608932 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.608943 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:21Z","lastTransitionTime":"2025-12-11T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.711552 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.711589 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.711598 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.711613 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.711622 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:21Z","lastTransitionTime":"2025-12-11T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.813477 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.813515 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.813524 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.813537 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.813546 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:21Z","lastTransitionTime":"2025-12-11T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.850766 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" event={"ID":"6c9cd07c-9f4b-41bb-b29b-db9411c64336","Type":"ContainerStarted","Data":"8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec"} Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.869217 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.884880 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.895654 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.905456 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.915998 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.916034 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.916067 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:21 crc 
kubenswrapper[4746]: I1211 09:54:21.916084 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.916096 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:21Z","lastTransitionTime":"2025-12-11T09:54:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.916416 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.932119 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.945663 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad8
88d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.958539 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.971921 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.984347 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-11T09:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:21 crc kubenswrapper[4746]: I1211 09:54:21.994150 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"
name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:21Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.006021 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.014685 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.017845 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.017874 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.017885 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.017899 4746 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.017909 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:22Z","lastTransitionTime":"2025-12-11T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.032832 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.121485 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.121551 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.121572 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.121590 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.121602 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:22Z","lastTransitionTime":"2025-12-11T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.224218 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.224263 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.224272 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.224284 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.224295 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:22Z","lastTransitionTime":"2025-12-11T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.326660 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.326699 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.326708 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.326722 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.326731 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:22Z","lastTransitionTime":"2025-12-11T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.430147 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.430186 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.430199 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.430216 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.430228 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:22Z","lastTransitionTime":"2025-12-11T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.533959 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.534004 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.534017 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.534034 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.534057 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:22Z","lastTransitionTime":"2025-12-11T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.629632 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.629677 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.629722 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:22 crc kubenswrapper[4746]: E1211 09:54:22.629760 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:22 crc kubenswrapper[4746]: E1211 09:54:22.629859 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:22 crc kubenswrapper[4746]: E1211 09:54:22.629935 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.635965 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.636024 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.636077 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.636105 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.636131 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:22Z","lastTransitionTime":"2025-12-11T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.738172 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.738214 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.738226 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.738241 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.738252 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:22Z","lastTransitionTime":"2025-12-11T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.841093 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.841132 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.841150 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.841173 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.841188 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:22Z","lastTransitionTime":"2025-12-11T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.855963 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovnkube-controller/0.log" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.858329 4746 generic.go:334] "Generic (PLEG): container finished" podID="014636cb-e768-4554-9556-460db2ebfdcb" containerID="f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d" exitCode=1 Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.858384 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerDied","Data":"f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d"} Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.859080 4746 scope.go:117] "RemoveContainer" containerID="f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.872419 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.884486 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.898346 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.909079 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.927526 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 09:54:21.912134 5974 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 09:54:21.912152 5974 handler.go:190] Sending *v1.Node event handler 2 
for removal\\\\nI1211 09:54:21.912163 5974 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 09:54:21.912181 5974 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 09:54:21.912193 5974 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 09:54:21.912205 5974 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 09:54:21.912204 5974 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 09:54:21.912196 5974 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 09:54:21.912228 5974 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 09:54:21.912228 5974 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 09:54:21.912236 5974 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 09:54:21.912282 5974 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1211 09:54:21.912305 5974 factory.go:656] Stopping watch factory\\\\nI1211 09:54:21.912315 5974 ovnkube.go:599] Stopped ovnkube\\\\nI1211 09:54:21.912330 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.940267 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.943575 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.943622 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.943634 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:22 crc 
kubenswrapper[4746]: I1211 09:54:22.943654 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.943667 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:22Z","lastTransitionTime":"2025-12-11T09:54:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.952881 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.967025 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.976840 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:22 crc kubenswrapper[4746]: I1211 09:54:22.989897 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:22Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.004685 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.021760 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad8
88d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.034975 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.046114 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.046161 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.046170 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.046185 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.046195 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:23Z","lastTransitionTime":"2025-12-11T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.049453 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.148457 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.148492 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.148501 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.148513 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.148541 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:23Z","lastTransitionTime":"2025-12-11T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.251743 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.251780 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.251789 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.251805 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.251816 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:23Z","lastTransitionTime":"2025-12-11T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.354302 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.354345 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.354356 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.354372 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.354383 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:23Z","lastTransitionTime":"2025-12-11T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.457622 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.457665 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.457675 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.457688 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.457697 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:23Z","lastTransitionTime":"2025-12-11T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.560301 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.560350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.560365 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.560388 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.560403 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:23Z","lastTransitionTime":"2025-12-11T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.666259 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.666304 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.666315 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.666332 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.666344 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:23Z","lastTransitionTime":"2025-12-11T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.768360 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.768403 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.768414 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.768430 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.768442 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:23Z","lastTransitionTime":"2025-12-11T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.785517 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv"] Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.785947 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.788986 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.789244 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.789938 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/770e7df6-7594-4c7a-8a3a-a7948e532da7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tgzfv\" (UID: \"770e7df6-7594-4c7a-8a3a-a7948e532da7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.789996 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/770e7df6-7594-4c7a-8a3a-a7948e532da7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tgzfv\" (UID: \"770e7df6-7594-4c7a-8a3a-a7948e532da7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.790022 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zltsx\" (UniqueName: \"kubernetes.io/projected/770e7df6-7594-4c7a-8a3a-a7948e532da7-kube-api-access-zltsx\") pod \"ovnkube-control-plane-749d76644c-tgzfv\" (UID: \"770e7df6-7594-4c7a-8a3a-a7948e532da7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.790109 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/770e7df6-7594-4c7a-8a3a-a7948e532da7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tgzfv\" (UID: \"770e7df6-7594-4c7a-8a3a-a7948e532da7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.806959 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.821958 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.846135 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 09:54:21.912134 5974 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 09:54:21.912152 5974 handler.go:190] Sending *v1.Node event handler 2 
for removal\\\\nI1211 09:54:21.912163 5974 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 09:54:21.912181 5974 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 09:54:21.912193 5974 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 09:54:21.912205 5974 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 09:54:21.912204 5974 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 09:54:21.912196 5974 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 09:54:21.912228 5974 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 09:54:21.912228 5974 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 09:54:21.912236 5974 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 09:54:21.912282 5974 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1211 09:54:21.912305 5974 factory.go:656] Stopping watch factory\\\\nI1211 09:54:21.912315 5974 ovnkube.go:599] Stopped ovnkube\\\\nI1211 09:54:21.912330 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.862067 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.863861 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovnkube-controller/0.log" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.866693 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerStarted","Data":"ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa"} Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.866847 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.873595 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.873637 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.873647 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.873661 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.873671 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:23Z","lastTransitionTime":"2025-12-11T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.874977 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.886768 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.891555 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/770e7df6-7594-4c7a-8a3a-a7948e532da7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tgzfv\" (UID: \"770e7df6-7594-4c7a-8a3a-a7948e532da7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.891605 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/770e7df6-7594-4c7a-8a3a-a7948e532da7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tgzfv\" (UID: \"770e7df6-7594-4c7a-8a3a-a7948e532da7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.891633 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zltsx\" (UniqueName: \"kubernetes.io/projected/770e7df6-7594-4c7a-8a3a-a7948e532da7-kube-api-access-zltsx\") pod \"ovnkube-control-plane-749d76644c-tgzfv\" (UID: \"770e7df6-7594-4c7a-8a3a-a7948e532da7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.891697 4746 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/770e7df6-7594-4c7a-8a3a-a7948e532da7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tgzfv\" (UID: \"770e7df6-7594-4c7a-8a3a-a7948e532da7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.892717 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/770e7df6-7594-4c7a-8a3a-a7948e532da7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-tgzfv\" (UID: \"770e7df6-7594-4c7a-8a3a-a7948e532da7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.892763 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/770e7df6-7594-4c7a-8a3a-a7948e532da7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-tgzfv\" (UID: \"770e7df6-7594-4c7a-8a3a-a7948e532da7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.897543 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/770e7df6-7594-4c7a-8a3a-a7948e532da7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-tgzfv\" (UID: \"770e7df6-7594-4c7a-8a3a-a7948e532da7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.898925 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.907111 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zltsx\" (UniqueName: \"kubernetes.io/projected/770e7df6-7594-4c7a-8a3a-a7948e532da7-kube-api-access-zltsx\") pod \"ovnkube-control-plane-749d76644c-tgzfv\" (UID: \"770e7df6-7594-4c7a-8a3a-a7948e532da7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.911064 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.923159 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.934856 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.946440 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.957649 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.969933 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.976293 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.976330 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.976340 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.976354 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.976363 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:23Z","lastTransitionTime":"2025-12-11T09:54:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.984604 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:23 crc kubenswrapper[4746]: I1211 09:54:23.998571 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernet
es.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,
\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:23Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.010602 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.023827 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.038480 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.058431 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 09:54:21.912134 5974 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 09:54:21.912152 5974 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 09:54:21.912163 5974 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 
09:54:21.912181 5974 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 09:54:21.912193 5974 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 09:54:21.912205 5974 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 09:54:21.912204 5974 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 09:54:21.912196 5974 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 09:54:21.912228 5974 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 09:54:21.912228 5974 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 09:54:21.912236 5974 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 09:54:21.912282 5974 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1211 09:54:21.912305 5974 factory.go:656] Stopping watch factory\\\\nI1211 09:54:21.912315 5974 ovnkube.go:599] Stopped ovnkube\\\\nI1211 09:54:21.912330 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.073035 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.079060 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:24 crc 
kubenswrapper[4746]: I1211 09:54:24.079096 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.079107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.079123 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.079135 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:24Z","lastTransitionTime":"2025-12-11T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.082829 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.096267 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.098763 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.113808 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.130598 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.150099 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.166459 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.180719 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.180754 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.180763 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:24 crc 
kubenswrapper[4746]: I1211 09:54:24.180777 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.180786 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:24Z","lastTransitionTime":"2025-12-11T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.190309 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.212558 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.232206 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.243518 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.282622 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.282672 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.282683 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.282698 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.282708 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:24Z","lastTransitionTime":"2025-12-11T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.295380 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.295583 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 09:54:40.295568371 +0000 UTC m=+53.155431674 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.400519 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.400581 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.400611 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.400644 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.400782 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.400838 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:40.400820765 +0000 UTC m=+53.260684078 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.401447 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.401483 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:40.401472754 +0000 UTC m=+53.261336067 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.401555 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.401569 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.401582 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.401594 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.401615 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.401628 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.401618 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:40.401609888 +0000 UTC m=+53.261473201 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.401677 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:40.40166881 +0000 UTC m=+53.261532223 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.403528 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.403562 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.403572 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.403585 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.403596 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:24Z","lastTransitionTime":"2025-12-11T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.492520 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xh6zv"] Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.493069 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.493129 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.512890 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.513148 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.513264 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.513377 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.513460 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:24Z","lastTransitionTime":"2025-12-11T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.523279 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06
d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.538029 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.549502 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.559506 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.569641 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.570193 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.570225 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.570236 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.570251 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.570261 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:24Z","lastTransitionTime":"2025-12-11T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.580632 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d74dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.582835 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\
\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release
-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"
sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":48599861
6},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\
\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.588762 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.588991 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.589119 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.589231 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.589387 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:24Z","lastTransitionTime":"2025-12-11T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.592003 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc 
kubenswrapper[4746]: E1211 09:54:24.600827 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.602921 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.604312 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.604366 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.604380 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.604398 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.604409 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:24Z","lastTransitionTime":"2025-12-11T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.611626 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs\") pod \"network-metrics-daemon-xh6zv\" (UID: \"2a55e871-062f-43fd-a1e2-b2296474f4f3\") " pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.611668 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z2lr\" (UniqueName: \"kubernetes.io/projected/2a55e871-062f-43fd-a1e2-b2296474f4f3-kube-api-access-8z2lr\") pod \"network-metrics-daemon-xh6zv\" (UID: \"2a55e871-062f-43fd-a1e2-b2296474f4f3\") " pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.614643 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.616399 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.619267 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.619298 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.619308 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.619322 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.619332 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:24Z","lastTransitionTime":"2025-12-11T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.629851 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.629829 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-
api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\
\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"where
abouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveR
eadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.629973 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.630097 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.630097 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.630196 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.630279 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.632135 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.639503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.639550 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.639563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.639580 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.639592 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:24Z","lastTransitionTime":"2025-12-11T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.649418 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.655420 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.655595 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.656948 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.656984 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.656994 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.657010 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.657018 4746 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:24Z","lastTransitionTime":"2025-12-11T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.660871 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolv
er\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.678467 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 09:54:21.912134 5974 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 09:54:21.912152 5974 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 09:54:21.912163 5974 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 
09:54:21.912181 5974 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 09:54:21.912193 5974 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 09:54:21.912205 5974 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 09:54:21.912204 5974 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 09:54:21.912196 5974 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 09:54:21.912228 5974 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 09:54:21.912228 5974 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 09:54:21.912236 5974 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 09:54:21.912282 5974 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1211 09:54:21.912305 5974 factory.go:656] Stopping watch factory\\\\nI1211 09:54:21.912315 5974 ovnkube.go:599] Stopped ovnkube\\\\nI1211 09:54:21.912330 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.693263 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.702021 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.712387 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs\") pod \"network-metrics-daemon-xh6zv\" (UID: \"2a55e871-062f-43fd-a1e2-b2296474f4f3\") " pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.712440 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z2lr\" (UniqueName: \"kubernetes.io/projected/2a55e871-062f-43fd-a1e2-b2296474f4f3-kube-api-access-8z2lr\") pod \"network-metrics-daemon-xh6zv\" (UID: \"2a55e871-062f-43fd-a1e2-b2296474f4f3\") " pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.712547 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.712624 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs podName:2a55e871-062f-43fd-a1e2-b2296474f4f3 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:25.212606509 +0000 UTC m=+38.072469822 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs") pod "network-metrics-daemon-xh6zv" (UID: "2a55e871-062f-43fd-a1e2-b2296474f4f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.714465 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.729456 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z2lr\" (UniqueName: \"kubernetes.io/projected/2a55e871-062f-43fd-a1e2-b2296474f4f3-kube-api-access-8z2lr\") pod \"network-metrics-daemon-xh6zv\" (UID: \"2a55e871-062f-43fd-a1e2-b2296474f4f3\") " pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.759246 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.759304 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.759314 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.759327 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.759336 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:24Z","lastTransitionTime":"2025-12-11T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.861370 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.861402 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.861412 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.861424 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.861433 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:24Z","lastTransitionTime":"2025-12-11T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.870612 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovnkube-controller/1.log" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.871154 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovnkube-controller/0.log" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.873229 4746 generic.go:334] "Generic (PLEG): container finished" podID="014636cb-e768-4554-9556-460db2ebfdcb" containerID="ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa" exitCode=1 Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.873284 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerDied","Data":"ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.873317 4746 scope.go:117] "RemoveContainer" containerID="f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.874004 4746 scope.go:117] "RemoveContainer" containerID="ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa" Dec 11 09:54:24 crc kubenswrapper[4746]: E1211 09:54:24.874207 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-w2s5z_openshift-ovn-kubernetes(014636cb-e768-4554-9556-460db2ebfdcb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podUID="014636cb-e768-4554-9556-460db2ebfdcb" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.875077 4746 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" event={"ID":"770e7df6-7594-4c7a-8a3a-a7948e532da7","Type":"ContainerStarted","Data":"51756d637dfd34e4eacc4f016219576937bcb063378f40943fbef7aea387db50"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.875115 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" event={"ID":"770e7df6-7594-4c7a-8a3a-a7948e532da7","Type":"ContainerStarted","Data":"8832c4e289c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.875125 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" event={"ID":"770e7df6-7594-4c7a-8a3a-a7948e532da7","Type":"ContainerStarted","Data":"5e352a2a74bc8d133bb604e6368ba8eb7e5fab36fa32b94e4d8ab301d77107c7"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.887808 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.902928 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.916197 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.927010 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc 
kubenswrapper[4746]: I1211 09:54:24.938745 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.952580 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.963930 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.963962 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.963971 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.963985 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.963995 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:24Z","lastTransitionTime":"2025-12-11T09:54:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.969198 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.979760 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:24 crc kubenswrapper[4746]: I1211 09:54:24.993097 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:24Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.002090 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.020551 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 09:54:21.912134 5974 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 09:54:21.912152 5974 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 09:54:21.912163 5974 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 
09:54:21.912181 5974 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 09:54:21.912193 5974 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 09:54:21.912205 5974 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 09:54:21.912204 5974 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 09:54:21.912196 5974 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 09:54:21.912228 5974 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 09:54:21.912228 5974 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 09:54:21.912236 5974 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 09:54:21.912282 5974 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1211 09:54:21.912305 5974 factory.go:656] Stopping watch factory\\\\nI1211 09:54:21.912315 5974 ovnkube.go:599] Stopped ovnkube\\\\nI1211 09:54:21.912330 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607269 6169 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vtfvl in node crc\\\\nI1211 09:54:24.607274 6169 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl after 0 failed attempt(s)\\\\nI1211 09:54:24.607278 6169 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607290 6169 
metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1211 09:54:24.607328 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nF1211 09:54:24.607339 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"
,\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.037513 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.048617 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.060246 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.066581 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.066798 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.066867 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.066927 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.066989 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:25Z","lastTransitionTime":"2025-12-11T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.073692 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.086284 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.099972 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.112704 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.122190 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.132255 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc 
kubenswrapper[4746]: I1211 09:54:25.143263 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.156100 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.169840 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.170081 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.170182 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.170266 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.170355 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:25Z","lastTransitionTime":"2025-12-11T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.170870 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.183609 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.192859 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.214286 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 09:54:21.912134 5974 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 09:54:21.912152 5974 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 09:54:21.912163 5974 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 
09:54:21.912181 5974 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 09:54:21.912193 5974 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 09:54:21.912205 5974 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 09:54:21.912204 5974 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 09:54:21.912196 5974 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 09:54:21.912228 5974 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 09:54:21.912228 5974 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 09:54:21.912236 5974 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 09:54:21.912282 5974 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1211 09:54:21.912305 5974 factory.go:656] Stopping watch factory\\\\nI1211 09:54:21.912315 5974 ovnkube.go:599] Stopped ovnkube\\\\nI1211 09:54:21.912330 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607269 6169 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vtfvl in node crc\\\\nI1211 09:54:24.607274 6169 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl after 0 failed attempt(s)\\\\nI1211 09:54:24.607278 6169 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607290 6169 
metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1211 09:54:24.607328 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nF1211 09:54:24.607339 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"
,\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.217669 4746 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs\") pod \"network-metrics-daemon-xh6zv\" (UID: \"2a55e871-062f-43fd-a1e2-b2296474f4f3\") " pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:25 crc kubenswrapper[4746]: E1211 09:54:25.217776 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 09:54:25 crc kubenswrapper[4746]: E1211 09:54:25.217835 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs podName:2a55e871-062f-43fd-a1e2-b2296474f4f3 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:26.217818872 +0000 UTC m=+39.077682185 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs") pod "network-metrics-daemon-xh6zv" (UID: "2a55e871-062f-43fd-a1e2-b2296474f4f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.226451 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.237189 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.248215 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8832c4e289c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51756d637dfd34e4eacc4f016219576937bcb063378f40943fbef7aea387db50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.260817 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-1
2-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d1
4bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.273284 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.273339 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.273350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:25 crc 
kubenswrapper[4746]: I1211 09:54:25.273367 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.273377 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:25Z","lastTransitionTime":"2025-12-11T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.273816 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.284863 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:25Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.376013 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.376115 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.376126 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.376143 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.376155 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:25Z","lastTransitionTime":"2025-12-11T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.478420 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.478467 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.478484 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.478501 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.478511 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:25Z","lastTransitionTime":"2025-12-11T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.581022 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.581096 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.581106 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.581121 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.581129 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:25Z","lastTransitionTime":"2025-12-11T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.629575 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:25 crc kubenswrapper[4746]: E1211 09:54:25.629824 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.683657 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.683704 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.683716 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.683732 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.683743 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:25Z","lastTransitionTime":"2025-12-11T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.786532 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.786570 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.786581 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.786597 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.786608 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:25Z","lastTransitionTime":"2025-12-11T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.883116 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovnkube-controller/1.log" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.888964 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.889079 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.889107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.889135 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.889157 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:25Z","lastTransitionTime":"2025-12-11T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.992487 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.992548 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.992563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.992585 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:25 crc kubenswrapper[4746]: I1211 09:54:25.992600 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:25Z","lastTransitionTime":"2025-12-11T09:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.095452 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.095496 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.095505 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.095521 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.095531 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:26Z","lastTransitionTime":"2025-12-11T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.199024 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.199088 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.199098 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.199113 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.199123 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:26Z","lastTransitionTime":"2025-12-11T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.226513 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs\") pod \"network-metrics-daemon-xh6zv\" (UID: \"2a55e871-062f-43fd-a1e2-b2296474f4f3\") " pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:26 crc kubenswrapper[4746]: E1211 09:54:26.226646 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 09:54:26 crc kubenswrapper[4746]: E1211 09:54:26.226712 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs podName:2a55e871-062f-43fd-a1e2-b2296474f4f3 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:28.226696771 +0000 UTC m=+41.086560094 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs") pod "network-metrics-daemon-xh6zv" (UID: "2a55e871-062f-43fd-a1e2-b2296474f4f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.254726 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.267325 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.285536 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.301598 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.301633 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.301645 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.301662 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.301675 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:26Z","lastTransitionTime":"2025-12-11T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.307219 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.318701 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.330292 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8832c4e289c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51756d637dfd34e4eacc4f016219576937bcb
063378f40943fbef7aea387db50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.343344 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd47
7174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.377333 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.403689 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.403731 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.403744 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.403760 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.403771 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:26Z","lastTransitionTime":"2025-12-11T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.412762 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f3906acabf348330f94729d30a516238985eb4915627c1266c73bdd0f401c26d\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1211 09:54:21.912134 5974 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1211 09:54:21.912152 5974 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1211 09:54:21.912163 5974 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1211 
09:54:21.912181 5974 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1211 09:54:21.912193 5974 handler.go:208] Removed *v1.Node event handler 7\\\\nI1211 09:54:21.912205 5974 handler.go:208] Removed *v1.Node event handler 2\\\\nI1211 09:54:21.912204 5974 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1211 09:54:21.912196 5974 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1211 09:54:21.912228 5974 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1211 09:54:21.912228 5974 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1211 09:54:21.912236 5974 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1211 09:54:21.912282 5974 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1211 09:54:21.912305 5974 factory.go:656] Stopping watch factory\\\\nI1211 09:54:21.912315 5974 ovnkube.go:599] Stopped ovnkube\\\\nI1211 09:54:21.912330 5974 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1211 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607269 6169 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vtfvl in node crc\\\\nI1211 09:54:24.607274 6169 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl after 0 failed attempt(s)\\\\nI1211 09:54:24.607278 6169 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607290 6169 
metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1211 09:54:24.607328 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nF1211 09:54:24.607339 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\"
,\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/
serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.424382 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt
/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.434661 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.444299 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.453607 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.461773 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.472468 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.485298 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\
"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.501926 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d74dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.505194 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.505236 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.505244 4746 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.505259 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.505267 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:26Z","lastTransitionTime":"2025-12-11T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.588777 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.589965 4746 scope.go:117] "RemoveContainer" containerID="ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa" Dec 11 09:54:26 crc kubenswrapper[4746]: E1211 09:54:26.590294 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-w2s5z_openshift-ovn-kubernetes(014636cb-e768-4554-9556-460db2ebfdcb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podUID="014636cb-e768-4554-9556-460db2ebfdcb" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.606940 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.606979 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.606989 4746 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.607004 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.607024 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:26Z","lastTransitionTime":"2025-12-11T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.608957 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb27670
3f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.625147 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.630390 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.630401 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:26 crc kubenswrapper[4746]: E1211 09:54:26.630648 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.630469 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:26 crc kubenswrapper[4746]: E1211 09:54:26.630945 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:26 crc kubenswrapper[4746]: E1211 09:54:26.630810 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.640063 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mo
untPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d74dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.652452 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc 
kubenswrapper[4746]: I1211 09:54:26.665432 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.679971 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.698602 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.713909 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.713949 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.713962 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.713981 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.713994 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:26Z","lastTransitionTime":"2025-12-11T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.722030 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b33
5e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.732965 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.750716 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607269 6169 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vtfvl in node crc\\\\nI1211 09:54:24.607274 6169 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl after 0 
failed attempt(s)\\\\nI1211 09:54:24.607278 6169 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607290 6169 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1211 09:54:24.607328 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nF1211 09:54:24.607339 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w2s5z_openshift-ovn-kubernetes(014636cb-e768-4554-9556-460db2ebfdcb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144
ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.764957 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.776082 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.786655 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8832c4e289c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51756d637dfd34e4eacc4f016219576937bcb063378f40943fbef7aea387db50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.801696 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-1
2-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d1
4bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.816574 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.816655 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.816985 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.817021 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.817040 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.817071 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:26Z","lastTransitionTime":"2025-12-11T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.828781 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:26Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.919334 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.919393 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.919409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.919428 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:26 crc kubenswrapper[4746]: I1211 09:54:26.919440 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:26Z","lastTransitionTime":"2025-12-11T09:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.022457 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.022820 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.022862 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.022880 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.022889 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:27Z","lastTransitionTime":"2025-12-11T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.126015 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.126092 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.126105 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.126125 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.126137 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:27Z","lastTransitionTime":"2025-12-11T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.228859 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.228908 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.228924 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.228944 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.228960 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:27Z","lastTransitionTime":"2025-12-11T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.330894 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.330935 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.330947 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.330964 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.330974 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:27Z","lastTransitionTime":"2025-12-11T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.432993 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.433032 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.433040 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.433076 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.433086 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:27Z","lastTransitionTime":"2025-12-11T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.535789 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.535825 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.535834 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.535849 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.535859 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:27Z","lastTransitionTime":"2025-12-11T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.630339 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:27 crc kubenswrapper[4746]: E1211 09:54:27.630498 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.638414 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.638472 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.638492 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.638515 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.638532 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:27Z","lastTransitionTime":"2025-12-11T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.647159 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.666896 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-a
dditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df3
12ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Comple
ted\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.683401 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.746174 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.746205 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.746215 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.746228 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.746238 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:27Z","lastTransitionTime":"2025-12-11T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.746405 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with 
unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-nod
e-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ov
n-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607269 6169 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vtfvl in node crc\\\\nI1211 09:54:24.607274 6169 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-multus/multus-additional-cni-plugins-vtfvl after 0 failed attempt(s)\\\\nI1211 09:54:24.607278 6169 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607290 6169 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1211 09:54:24.607328 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nF1211 09:54:24.607339 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w2s5z_openshift-ovn-kubernetes(014636cb-e768-4554-9556-460db2ebfdcb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144
ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.761122 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.771803 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.783520 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8832c4e289c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51756d637dfd34e4eacc4f016219576937bcb063378f40943fbef7aea387db50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.797322 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\
\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3ae
b4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.806625 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.819006 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.830078 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.841865 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.848760 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.848915 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.848940 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.848958 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.848972 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:27Z","lastTransitionTime":"2025-12-11T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.853602 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4ca
f18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.865108 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.875767 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:27 crc 
kubenswrapper[4746]: I1211 09:54:27.885867 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:27Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.950875 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.950918 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.950931 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.950947 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:27 crc kubenswrapper[4746]: I1211 09:54:27.950957 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:27Z","lastTransitionTime":"2025-12-11T09:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.053554 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.053591 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.053603 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.053618 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.053629 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:28Z","lastTransitionTime":"2025-12-11T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.156166 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.156223 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.156233 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.156248 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.156256 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:28Z","lastTransitionTime":"2025-12-11T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.250001 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs\") pod \"network-metrics-daemon-xh6zv\" (UID: \"2a55e871-062f-43fd-a1e2-b2296474f4f3\") " pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:28 crc kubenswrapper[4746]: E1211 09:54:28.250197 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 09:54:28 crc kubenswrapper[4746]: E1211 09:54:28.250298 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs podName:2a55e871-062f-43fd-a1e2-b2296474f4f3 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:32.250278955 +0000 UTC m=+45.110142268 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs") pod "network-metrics-daemon-xh6zv" (UID: "2a55e871-062f-43fd-a1e2-b2296474f4f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.258186 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.258359 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.258430 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.258544 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.258619 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:28Z","lastTransitionTime":"2025-12-11T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.361242 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.361282 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.361295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.361310 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.361321 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:28Z","lastTransitionTime":"2025-12-11T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.463663 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.463704 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.463729 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.463745 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.463754 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:28Z","lastTransitionTime":"2025-12-11T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.566710 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.566810 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.566830 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.567010 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.567023 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:28Z","lastTransitionTime":"2025-12-11T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.629574 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.629663 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:28 crc kubenswrapper[4746]: E1211 09:54:28.629742 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.630072 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:28 crc kubenswrapper[4746]: E1211 09:54:28.630121 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:28 crc kubenswrapper[4746]: E1211 09:54:28.630177 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.669971 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.670028 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.670068 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.670087 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.670097 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:28Z","lastTransitionTime":"2025-12-11T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.773436 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.773480 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.773488 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.773503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.773513 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:28Z","lastTransitionTime":"2025-12-11T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.876066 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.876105 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.876117 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.876163 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.876177 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:28Z","lastTransitionTime":"2025-12-11T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.978988 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.979066 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.979087 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.979107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:28 crc kubenswrapper[4746]: I1211 09:54:28.979124 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:28Z","lastTransitionTime":"2025-12-11T09:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.082143 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.082183 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.082192 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.082209 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.082217 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:29Z","lastTransitionTime":"2025-12-11T09:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.184741 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.184777 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.184788 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.184810 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.184821 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:29Z","lastTransitionTime":"2025-12-11T09:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.287284 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.287329 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.287340 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.287359 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.287371 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:29Z","lastTransitionTime":"2025-12-11T09:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.389499 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.389538 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.389549 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.389566 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.389579 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:29Z","lastTransitionTime":"2025-12-11T09:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.492088 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.492168 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.492193 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.492223 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.492248 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:29Z","lastTransitionTime":"2025-12-11T09:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.595271 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.595321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.595343 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.595362 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.595375 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:29Z","lastTransitionTime":"2025-12-11T09:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.629720 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:29 crc kubenswrapper[4746]: E1211 09:54:29.629889 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.698213 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.698266 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.698278 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.698295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.698311 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:29Z","lastTransitionTime":"2025-12-11T09:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.800171 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.800213 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.800225 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.800242 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.800254 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:29Z","lastTransitionTime":"2025-12-11T09:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.902719 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.902754 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.902765 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.902782 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:29 crc kubenswrapper[4746]: I1211 09:54:29.902794 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:29Z","lastTransitionTime":"2025-12-11T09:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.006144 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.006183 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.006191 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.006204 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.006214 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:30Z","lastTransitionTime":"2025-12-11T09:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.109901 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.109991 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.110016 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.110086 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.110117 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:30Z","lastTransitionTime":"2025-12-11T09:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.212100 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.212160 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.212174 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.212190 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.212203 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:30Z","lastTransitionTime":"2025-12-11T09:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.315085 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.315121 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.315129 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.315144 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.315153 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:30Z","lastTransitionTime":"2025-12-11T09:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.418106 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.418145 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.418154 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.418167 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.418177 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:30Z","lastTransitionTime":"2025-12-11T09:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.520723 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.521133 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.521264 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.521372 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.521480 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:30Z","lastTransitionTime":"2025-12-11T09:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.624012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.624078 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.624091 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.624109 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.624120 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:30Z","lastTransitionTime":"2025-12-11T09:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.629793 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.629900 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:30 crc kubenswrapper[4746]: E1211 09:54:30.630131 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.630217 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:30 crc kubenswrapper[4746]: E1211 09:54:30.630440 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:30 crc kubenswrapper[4746]: E1211 09:54:30.630327 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.726586 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.726629 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.726644 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.726667 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.726678 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:30Z","lastTransitionTime":"2025-12-11T09:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.829793 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.829827 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.829837 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.829851 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.829863 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:30Z","lastTransitionTime":"2025-12-11T09:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.932538 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.932576 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.932585 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.932598 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:30 crc kubenswrapper[4746]: I1211 09:54:30.932608 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:30Z","lastTransitionTime":"2025-12-11T09:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.034897 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.034932 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.034944 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.034960 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.034970 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:31Z","lastTransitionTime":"2025-12-11T09:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.136552 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.136591 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.136600 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.136616 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.136625 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:31Z","lastTransitionTime":"2025-12-11T09:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.238757 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.238815 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.238828 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.238846 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.238856 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:31Z","lastTransitionTime":"2025-12-11T09:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.341285 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.341325 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.341335 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.341350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.341360 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:31Z","lastTransitionTime":"2025-12-11T09:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.444140 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.444181 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.444191 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.444206 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.444218 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:31Z","lastTransitionTime":"2025-12-11T09:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.546736 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.546773 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.546783 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.546796 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.546806 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:31Z","lastTransitionTime":"2025-12-11T09:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.629409 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:31 crc kubenswrapper[4746]: E1211 09:54:31.629539 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.649507 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.649550 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.649561 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.649577 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.649609 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:31Z","lastTransitionTime":"2025-12-11T09:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.751595 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.751652 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.751662 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.751676 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.751684 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:31Z","lastTransitionTime":"2025-12-11T09:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.854418 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.854461 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.854470 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.854484 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.854493 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:31Z","lastTransitionTime":"2025-12-11T09:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.956736 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.956802 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.956811 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.956827 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:31 crc kubenswrapper[4746]: I1211 09:54:31.956837 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:31Z","lastTransitionTime":"2025-12-11T09:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.059839 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.059880 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.059893 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.059912 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.059925 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:32Z","lastTransitionTime":"2025-12-11T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.161786 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.161836 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.161867 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.161886 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.161898 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:32Z","lastTransitionTime":"2025-12-11T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.264280 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.264318 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.264330 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.264344 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.264355 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:32Z","lastTransitionTime":"2025-12-11T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.291264 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs\") pod \"network-metrics-daemon-xh6zv\" (UID: \"2a55e871-062f-43fd-a1e2-b2296474f4f3\") " pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:32 crc kubenswrapper[4746]: E1211 09:54:32.291425 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 09:54:32 crc kubenswrapper[4746]: E1211 09:54:32.291501 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs podName:2a55e871-062f-43fd-a1e2-b2296474f4f3 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:40.291478612 +0000 UTC m=+53.151341945 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs") pod "network-metrics-daemon-xh6zv" (UID: "2a55e871-062f-43fd-a1e2-b2296474f4f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.366891 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.366925 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.366935 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.366949 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.366962 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:32Z","lastTransitionTime":"2025-12-11T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.469269 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.469302 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.469310 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.469324 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.469339 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:32Z","lastTransitionTime":"2025-12-11T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.572142 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.572186 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.572198 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.572215 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.572231 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:32Z","lastTransitionTime":"2025-12-11T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.630152 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.630229 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.630164 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:32 crc kubenswrapper[4746]: E1211 09:54:32.630346 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:32 crc kubenswrapper[4746]: E1211 09:54:32.630474 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:32 crc kubenswrapper[4746]: E1211 09:54:32.630524 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.673811 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.673841 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.673853 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.673869 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.673881 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:32Z","lastTransitionTime":"2025-12-11T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.776912 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.777165 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.777184 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.777244 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.777261 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:32Z","lastTransitionTime":"2025-12-11T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.879085 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.879130 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.879183 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.879200 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.879211 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:32Z","lastTransitionTime":"2025-12-11T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.981293 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.981328 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.981338 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.981356 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:32 crc kubenswrapper[4746]: I1211 09:54:32.981370 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:32Z","lastTransitionTime":"2025-12-11T09:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.084876 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.084938 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.084957 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.084981 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.085000 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:33Z","lastTransitionTime":"2025-12-11T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.188390 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.188449 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.188464 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.188485 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.188501 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:33Z","lastTransitionTime":"2025-12-11T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.290114 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.290152 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.290161 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.290175 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.290184 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:33Z","lastTransitionTime":"2025-12-11T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.392212 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.392327 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.392397 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.392423 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.392442 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:33Z","lastTransitionTime":"2025-12-11T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.496029 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.496211 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.496240 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.496270 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.496296 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:33Z","lastTransitionTime":"2025-12-11T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.598887 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.598936 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.598953 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.598977 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.598995 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:33Z","lastTransitionTime":"2025-12-11T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.630424 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:33 crc kubenswrapper[4746]: E1211 09:54:33.630704 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.702220 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.702262 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.702273 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.702290 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.702302 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:33Z","lastTransitionTime":"2025-12-11T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.805768 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.805816 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.805827 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.805848 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.805859 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:33Z","lastTransitionTime":"2025-12-11T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.908102 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.908134 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.908143 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.908156 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:33 crc kubenswrapper[4746]: I1211 09:54:33.908164 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:33Z","lastTransitionTime":"2025-12-11T09:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.010946 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.010991 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.011012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.011030 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.011066 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:34Z","lastTransitionTime":"2025-12-11T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.116396 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.116449 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.116462 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.116481 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.116494 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:34Z","lastTransitionTime":"2025-12-11T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.218997 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.219059 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.219070 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.219087 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.219095 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:34Z","lastTransitionTime":"2025-12-11T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.321724 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.321766 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.321778 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.321834 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.321848 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:34Z","lastTransitionTime":"2025-12-11T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.424819 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.424866 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.424876 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.424890 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.424901 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:34Z","lastTransitionTime":"2025-12-11T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.526988 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.527036 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.527081 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.527106 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.527123 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:34Z","lastTransitionTime":"2025-12-11T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.629214 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.629238 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.629252 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.629263 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.629276 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.629286 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:34Z","lastTransitionTime":"2025-12-11T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:34 crc kubenswrapper[4746]: E1211 09:54:34.629337 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.629381 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:34 crc kubenswrapper[4746]: E1211 09:54:34.629419 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.629463 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:34 crc kubenswrapper[4746]: E1211 09:54:34.629501 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.731784 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.731867 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.731887 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.731915 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.731939 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:34Z","lastTransitionTime":"2025-12-11T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.834728 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.834774 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.834786 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.834803 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.834813 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:34Z","lastTransitionTime":"2025-12-11T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.936270 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.936300 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.936308 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.936321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.936330 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:34Z","lastTransitionTime":"2025-12-11T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.946325 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.946466 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.946537 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.946614 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.946675 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:34Z","lastTransitionTime":"2025-12-11T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:34 crc kubenswrapper[4746]: E1211 09:54:34.968281 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:34Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.973347 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.973527 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.973648 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.973746 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.973904 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:34Z","lastTransitionTime":"2025-12-11T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:34 crc kubenswrapper[4746]: E1211 09:54:34.993906 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:34Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.998455 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.998643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.998706 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.998774 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:34 crc kubenswrapper[4746]: I1211 09:54:34.998845 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:34Z","lastTransitionTime":"2025-12-11T09:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:35 crc kubenswrapper[4746]: E1211 09:54:35.011373 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:35Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.015808 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.015840 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.015850 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.015866 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.015877 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:35Z","lastTransitionTime":"2025-12-11T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:35 crc kubenswrapper[4746]: E1211 09:54:35.030325 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:35Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.033547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.033581 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.033600 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.033616 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.033628 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:35Z","lastTransitionTime":"2025-12-11T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:35 crc kubenswrapper[4746]: E1211 09:54:35.044558 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:35Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:35 crc kubenswrapper[4746]: E1211 09:54:35.044675 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.046172 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.046209 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.046222 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.046240 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.046252 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:35Z","lastTransitionTime":"2025-12-11T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.148502 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.148552 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.148566 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.148584 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.148594 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:35Z","lastTransitionTime":"2025-12-11T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.250783 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.250820 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.250829 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.250848 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.250866 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:35Z","lastTransitionTime":"2025-12-11T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.353759 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.353794 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.353805 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.353820 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.353831 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:35Z","lastTransitionTime":"2025-12-11T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.457355 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.457473 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.457592 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.457629 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.457703 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:35Z","lastTransitionTime":"2025-12-11T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.559646 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.559677 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.559685 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.559697 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.559705 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:35Z","lastTransitionTime":"2025-12-11T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.629925 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:35 crc kubenswrapper[4746]: E1211 09:54:35.630191 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.662569 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.662619 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.662635 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.662655 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.662670 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:35Z","lastTransitionTime":"2025-12-11T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.766090 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.766153 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.766177 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.766207 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.766229 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:35Z","lastTransitionTime":"2025-12-11T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.868390 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.868510 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.868530 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.868552 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.868571 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:35Z","lastTransitionTime":"2025-12-11T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.971619 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.971666 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.971684 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.971707 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:35 crc kubenswrapper[4746]: I1211 09:54:35.971743 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:35Z","lastTransitionTime":"2025-12-11T09:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.075084 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.075138 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.075155 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.075182 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.075199 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:36Z","lastTransitionTime":"2025-12-11T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.178307 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.178394 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.178415 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.178440 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.178457 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:36Z","lastTransitionTime":"2025-12-11T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.282944 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.283020 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.283041 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.283105 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.283129 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:36Z","lastTransitionTime":"2025-12-11T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.386348 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.386377 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.386389 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.386407 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.386421 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:36Z","lastTransitionTime":"2025-12-11T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.489521 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.489605 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.489626 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.489946 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.490289 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:36Z","lastTransitionTime":"2025-12-11T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.593641 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.593697 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.593715 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.593738 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.593756 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:36Z","lastTransitionTime":"2025-12-11T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.630342 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:36 crc kubenswrapper[4746]: E1211 09:54:36.630530 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.630342 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.630376 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:36 crc kubenswrapper[4746]: E1211 09:54:36.630714 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:36 crc kubenswrapper[4746]: E1211 09:54:36.630855 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.696559 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.696684 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.696749 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.696781 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.696799 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:36Z","lastTransitionTime":"2025-12-11T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.799542 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.799586 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.799598 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.799617 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.799629 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:36Z","lastTransitionTime":"2025-12-11T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.902535 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.902588 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.902601 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.902620 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:36 crc kubenswrapper[4746]: I1211 09:54:36.902634 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:36Z","lastTransitionTime":"2025-12-11T09:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.004848 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.004878 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.004889 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.004904 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.004916 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:37Z","lastTransitionTime":"2025-12-11T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.107570 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.107612 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.107624 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.107642 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.107655 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:37Z","lastTransitionTime":"2025-12-11T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.210981 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.211118 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.211139 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.211165 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.211203 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:37Z","lastTransitionTime":"2025-12-11T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.314178 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.314344 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.314377 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.314478 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.314596 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:37Z","lastTransitionTime":"2025-12-11T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.417785 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.417838 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.417855 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.417880 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.417898 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:37Z","lastTransitionTime":"2025-12-11T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.520863 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.520925 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.520942 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.520966 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.520987 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:37Z","lastTransitionTime":"2025-12-11T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.624931 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.625000 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.625015 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.625034 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.625079 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:37Z","lastTransitionTime":"2025-12-11T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.629357 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:37 crc kubenswrapper[4746]: E1211 09:54:37.629518 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.647250 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8832c4e289c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51756d637dfd34e4eacc4f016219576937bcb063378f40943fbef7aea387db50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.682105 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-clu
ster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f457
31627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.702189 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.726352 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607269 6169 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vtfvl in node crc\\\\nI1211 09:54:24.607274 6169 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl after 0 
failed attempt(s)\\\\nI1211 09:54:24.607278 6169 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607290 6169 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1211 09:54:24.607328 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nF1211 09:54:24.607339 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w2s5z_openshift-ovn-kubernetes(014636cb-e768-4554-9556-460db2ebfdcb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144
ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.737681 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.737731 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.737744 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.737761 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.737775 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:37Z","lastTransitionTime":"2025-12-11T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.749691 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:
54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.760873 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.773384 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.787406 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.801494 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.817401 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.832038 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.840506 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.840538 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.840547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.840561 4746 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.840570 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:37Z","lastTransitionTime":"2025-12-11T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.849317 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d74dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.863638 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:37 crc 
kubenswrapper[4746]: I1211 09:54:37.883355 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.898261 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.911416 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:37Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.942909 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.942955 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.942966 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.942982 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:37 crc kubenswrapper[4746]: I1211 09:54:37.942993 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:37Z","lastTransitionTime":"2025-12-11T09:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.046024 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.046108 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.046126 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.046150 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.046167 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:38Z","lastTransitionTime":"2025-12-11T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.148864 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.148938 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.148958 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.148982 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.148994 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:38Z","lastTransitionTime":"2025-12-11T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.251971 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.252009 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.252019 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.252034 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.252065 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:38Z","lastTransitionTime":"2025-12-11T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.353975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.354012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.354021 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.354034 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.354054 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:38Z","lastTransitionTime":"2025-12-11T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.456857 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.456907 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.456916 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.456930 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.456938 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:38Z","lastTransitionTime":"2025-12-11T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.560571 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.560601 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.560610 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.560622 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.560630 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:38Z","lastTransitionTime":"2025-12-11T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.630394 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:38 crc kubenswrapper[4746]: E1211 09:54:38.630546 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.630424 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:38 crc kubenswrapper[4746]: E1211 09:54:38.630629 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.630403 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:38 crc kubenswrapper[4746]: E1211 09:54:38.630691 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.662765 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.663002 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.663085 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.663130 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.663147 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:38Z","lastTransitionTime":"2025-12-11T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.766413 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.766890 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.766965 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.767086 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.767180 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:38Z","lastTransitionTime":"2025-12-11T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.870336 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.870448 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.870472 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.870501 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.870523 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:38Z","lastTransitionTime":"2025-12-11T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.974213 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.974285 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.974306 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.974332 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:38 crc kubenswrapper[4746]: I1211 09:54:38.974350 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:38Z","lastTransitionTime":"2025-12-11T09:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.076727 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.076774 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.076790 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.076811 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.076829 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:39Z","lastTransitionTime":"2025-12-11T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.179603 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.179669 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.179686 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.179711 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.179728 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:39Z","lastTransitionTime":"2025-12-11T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.282265 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.282331 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.282354 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.282385 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.282408 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:39Z","lastTransitionTime":"2025-12-11T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.386151 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.386213 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.386231 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.386256 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.386272 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:39Z","lastTransitionTime":"2025-12-11T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.489719 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.489797 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.489820 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.489850 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.489880 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:39Z","lastTransitionTime":"2025-12-11T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.592829 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.592880 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.592890 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.592909 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.592923 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:39Z","lastTransitionTime":"2025-12-11T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.629389 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:39 crc kubenswrapper[4746]: E1211 09:54:39.629853 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.630153 4746 scope.go:117] "RemoveContainer" containerID="ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.695241 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.695439 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.695451 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.695469 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.695481 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:39Z","lastTransitionTime":"2025-12-11T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.797512 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.797585 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.797604 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.797629 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.797647 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:39Z","lastTransitionTime":"2025-12-11T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.900758 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.900796 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.900804 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.900823 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:39 crc kubenswrapper[4746]: I1211 09:54:39.900831 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:39Z","lastTransitionTime":"2025-12-11T09:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.003412 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.003452 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.003463 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.003478 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.003489 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:40Z","lastTransitionTime":"2025-12-11T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.106091 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.106133 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.106151 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.106176 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.106195 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:40Z","lastTransitionTime":"2025-12-11T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.208503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.208540 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.208555 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.208572 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.208583 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:40Z","lastTransitionTime":"2025-12-11T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.312866 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.312924 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.312941 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.312964 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.312982 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:40Z","lastTransitionTime":"2025-12-11T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.376523 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.376711 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 09:55:12.376682229 +0000 UTC m=+85.236545562 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.376829 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs\") pod \"network-metrics-daemon-xh6zv\" (UID: \"2a55e871-062f-43fd-a1e2-b2296474f4f3\") " pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.376977 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.377029 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs podName:2a55e871-062f-43fd-a1e2-b2296474f4f3 nodeName:}" failed. No retries permitted until 2025-12-11 09:54:56.377019078 +0000 UTC m=+69.236882401 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs") pod "network-metrics-daemon-xh6zv" (UID: "2a55e871-062f-43fd-a1e2-b2296474f4f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.415624 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.415670 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.415683 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.415701 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.415714 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:40Z","lastTransitionTime":"2025-12-11T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.477602 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.477674 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.477733 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.477795 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.477912 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object 
"openshift-network-console"/"networking-console-plugin" not registered Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.477937 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.477970 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.477984 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.478006 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.478022 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 09:55:12.477996349 +0000 UTC m=+85.337859692 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.477937 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.478080 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.478089 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.478090 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 09:55:12.47803427 +0000 UTC m=+85.337897623 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.478121 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-11 09:55:12.478109392 +0000 UTC m=+85.337972735 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.478142 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 09:55:12.478131923 +0000 UTC m=+85.337995276 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.518496 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.518581 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.518605 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.518648 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.518672 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:40Z","lastTransitionTime":"2025-12-11T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.621609 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.621658 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.621675 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.621695 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.621710 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:40Z","lastTransitionTime":"2025-12-11T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.629982 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.630080 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.629993 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.630141 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.630210 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:40 crc kubenswrapper[4746]: E1211 09:54:40.630281 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.724483 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.724554 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.724566 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.724584 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.724594 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:40Z","lastTransitionTime":"2025-12-11T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.829895 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.829941 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.829952 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.829970 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.829982 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:40Z","lastTransitionTime":"2025-12-11T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.932379 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.932417 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.932428 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.932448 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.932458 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:40Z","lastTransitionTime":"2025-12-11T09:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.935144 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovnkube-controller/1.log" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.938068 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerStarted","Data":"295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6"} Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.938997 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.954411 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad9
35a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d74dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:40Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:40 crc kubenswrapper[4746]: I1211 09:54:40.966588 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:40Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:40 crc 
kubenswrapper[4746]: I1211 09:54:40.986529 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:40Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.006429 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.036235 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.036291 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.036303 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.036322 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.036335 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:41Z","lastTransitionTime":"2025-12-11T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.037618 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd071
65055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64
d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.055136 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.072781 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.088455 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/
host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.100250 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/e
tc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.112363 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8832c4e289c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51756d637dfd34e4eacc4f016219576937bcb
063378f40943fbef7aea387db50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.127281 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd47
7174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.137020 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.147384 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.147427 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.147438 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.147489 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.147505 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:41Z","lastTransitionTime":"2025-12-11T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.156706 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607269 6169 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vtfvl in node crc\\\\nI1211 09:54:24.607274 6169 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl after 0 
failed attempt(s)\\\\nI1211 09:54:24.607278 6169 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607290 6169 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1211 09:54:24.607328 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nF1211 09:54:24.607339 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name
\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.210003 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.269668 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.269713 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.269725 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.269742 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.269752 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:41Z","lastTransitionTime":"2025-12-11T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.302713 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.332635 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.371620 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.372123 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.372204 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 
09:54:41.372270 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.372368 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:41Z","lastTransitionTime":"2025-12-11T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.474869 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.474895 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.474903 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.474921 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.474937 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:41Z","lastTransitionTime":"2025-12-11T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.577295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.577357 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.577386 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.577425 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.577448 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:41Z","lastTransitionTime":"2025-12-11T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.629659 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:41 crc kubenswrapper[4746]: E1211 09:54:41.630011 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.680181 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.680218 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.680227 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.680242 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.680252 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:41Z","lastTransitionTime":"2025-12-11T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.783163 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.783208 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.783219 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.783237 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.783250 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:41Z","lastTransitionTime":"2025-12-11T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.887421 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.887455 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.887465 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.887479 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.887490 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:41Z","lastTransitionTime":"2025-12-11T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.931534 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.944581 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.951430 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node
-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.972854 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607269 6169 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vtfvl in node crc\\\\nI1211 09:54:24.607274 6169 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl after 0 
failed attempt(s)\\\\nI1211 09:54:24.607278 6169 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607290 6169 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1211 09:54:24.607328 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nF1211 09:54:24.607339 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error 
occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name
\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.987231 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.990526 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:41 crc 
kubenswrapper[4746]: I1211 09:54:41.990581 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.990597 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.990616 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:41 crc kubenswrapper[4746]: I1211 09:54:41.990629 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:41Z","lastTransitionTime":"2025-12-11T09:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.001284 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:41Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.019243 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8832c4e289c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51756d637dfd34e4eacc4f016219576937bcb063378f40943fbef7aea387db50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.039389 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3ae
b4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.053555 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.066340 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.078428 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.093183 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.093267 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.093284 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.093309 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.093326 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:42Z","lastTransitionTime":"2025-12-11T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.093896 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.110672 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run
/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.129285 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.141255 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:42 crc 
kubenswrapper[4746]: I1211 09:54:42.155933 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.168736 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.190116 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.195884 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.195955 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.195973 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.195992 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.196019 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:42Z","lastTransitionTime":"2025-12-11T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.299471 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.299528 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.299545 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.299568 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.299583 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:42Z","lastTransitionTime":"2025-12-11T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.402894 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.402965 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.402991 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.403025 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.403083 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:42Z","lastTransitionTime":"2025-12-11T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.506008 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.506106 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.506124 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.506150 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.506167 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:42Z","lastTransitionTime":"2025-12-11T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.609330 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.609381 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.609392 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.609407 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.609418 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:42Z","lastTransitionTime":"2025-12-11T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.629595 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.629699 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.629617 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:42 crc kubenswrapper[4746]: E1211 09:54:42.629780 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:42 crc kubenswrapper[4746]: E1211 09:54:42.629899 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:42 crc kubenswrapper[4746]: E1211 09:54:42.630018 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.712378 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.712417 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.712424 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.712440 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.712449 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:42Z","lastTransitionTime":"2025-12-11T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.815118 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.815188 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.815200 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.815217 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.815233 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:42Z","lastTransitionTime":"2025-12-11T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.917200 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.917466 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.917551 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.917618 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.917684 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:42Z","lastTransitionTime":"2025-12-11T09:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.946324 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovnkube-controller/2.log" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.946755 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovnkube-controller/1.log" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.948855 4746 generic.go:334] "Generic (PLEG): container finished" podID="014636cb-e768-4554-9556-460db2ebfdcb" containerID="295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6" exitCode=1 Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.948964 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerDied","Data":"295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6"} Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.949076 4746 scope.go:117] "RemoveContainer" containerID="ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.950058 4746 scope.go:117] "RemoveContainer" containerID="295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6" Dec 11 09:54:42 crc kubenswrapper[4746]: E1211 09:54:42.950199 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w2s5z_openshift-ovn-kubernetes(014636cb-e768-4554-9556-460db2ebfdcb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podUID="014636cb-e768-4554-9556-460db2ebfdcb" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.963920 4746 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controlle
r\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.974707 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f86bd9b-2d54-4712-ad26-0a49e1b146dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffb65884bd374a1f68af73bd917b4b030ca7c890d5dbdeace452d63bcfc8d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d497e87b108ba839d40a853ccdf1c277759bd4b987852da432b7f876d1ce1890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a1e1264ef536cc9003210b7d9fc47faa3d6b2b80bf328fb0fafddb91e1ad5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.986365 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:42 crc kubenswrapper[4746]: I1211 09:54:42.997475 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:42Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.009650 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:43Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.020383 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.020478 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.020544 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.020626 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.020688 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:43Z","lastTransitionTime":"2025-12-11T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.029695 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:43Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.049913 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:43Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.080022 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:43Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:43 crc 
kubenswrapper[4746]: I1211 09:54:43.095908 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:43Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.115276 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:43Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.122653 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.122704 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.122717 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.122736 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.122747 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:43Z","lastTransitionTime":"2025-12-11T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.128677 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:43Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.142173 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T0
9:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:43Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.153648 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:43Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.172548 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607269 6169 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vtfvl in node crc\\\\nI1211 09:54:24.607274 6169 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl after 0 
failed attempt(s)\\\\nI1211 09:54:24.607278 6169 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607290 6169 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1211 09:54:24.607328 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nF1211 09:54:24.607339 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"message\\\":\\\"utate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] 
Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 09:54:41.930797 6374 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1211 09:54:41.932734 6374 ovnkube.go:599] Stopped ovnkube\\\\nI1211 09:54:41.932775 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 09:54:41.932842 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536
ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:43Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.187800 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:43Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.197210 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:43Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.206633 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8832c4e289c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51756d637dfd34e4eacc4f016219576937bcb063378f40943fbef7aea387db50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:43Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.225322 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.225355 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.225365 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.225377 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.225385 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:43Z","lastTransitionTime":"2025-12-11T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.328295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.328362 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.328374 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.328391 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.328402 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:43Z","lastTransitionTime":"2025-12-11T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.431208 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.431267 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.431284 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.431308 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.431326 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:43Z","lastTransitionTime":"2025-12-11T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.534085 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.534151 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.534168 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.534189 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.534204 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:43Z","lastTransitionTime":"2025-12-11T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.630220 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:43 crc kubenswrapper[4746]: E1211 09:54:43.630423 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.636347 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.636389 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.636403 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.636422 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.636434 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:43Z","lastTransitionTime":"2025-12-11T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.739410 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.739495 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.739521 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.739553 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.739578 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:43Z","lastTransitionTime":"2025-12-11T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.841981 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.842026 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.842038 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.842103 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.842116 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:43Z","lastTransitionTime":"2025-12-11T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.944958 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.945085 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.945106 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.945132 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.945151 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:43Z","lastTransitionTime":"2025-12-11T09:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:43 crc kubenswrapper[4746]: I1211 09:54:43.954844 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovnkube-controller/2.log" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.048460 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.048505 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.048516 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.048531 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.048543 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:44Z","lastTransitionTime":"2025-12-11T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.150449 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.150515 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.150531 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.150557 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.150575 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:44Z","lastTransitionTime":"2025-12-11T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.253577 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.253656 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.253682 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.253712 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.253734 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:44Z","lastTransitionTime":"2025-12-11T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.356020 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.356122 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.356139 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.356159 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.356173 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:44Z","lastTransitionTime":"2025-12-11T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.457869 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.457981 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.457998 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.458018 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.458028 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:44Z","lastTransitionTime":"2025-12-11T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.560822 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.560857 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.560868 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.560883 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.560895 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:44Z","lastTransitionTime":"2025-12-11T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.630197 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.630195 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.630353 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:44 crc kubenswrapper[4746]: E1211 09:54:44.630518 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:44 crc kubenswrapper[4746]: E1211 09:54:44.630652 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:44 crc kubenswrapper[4746]: E1211 09:54:44.630869 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.663494 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.663570 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.663607 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.663643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.663667 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:44Z","lastTransitionTime":"2025-12-11T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.766406 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.766454 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.766466 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.766482 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.766492 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:44Z","lastTransitionTime":"2025-12-11T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.868528 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.868563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.868572 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.868588 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.868602 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:44Z","lastTransitionTime":"2025-12-11T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.972095 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.972147 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.972157 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.972178 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:44 crc kubenswrapper[4746]: I1211 09:54:44.972193 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:44Z","lastTransitionTime":"2025-12-11T09:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.074753 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.074819 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.074829 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.074841 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.074850 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:45Z","lastTransitionTime":"2025-12-11T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.176806 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.176850 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.176864 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.176883 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.176898 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:45Z","lastTransitionTime":"2025-12-11T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.279957 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.280003 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.280015 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.280033 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.280075 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:45Z","lastTransitionTime":"2025-12-11T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.382576 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.382624 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.382641 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.382659 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.382674 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:45Z","lastTransitionTime":"2025-12-11T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.442333 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.442372 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.442382 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.442398 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.442406 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:45Z","lastTransitionTime":"2025-12-11T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:45 crc kubenswrapper[4746]: E1211 09:54:45.458601 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:45Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.462721 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.462745 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.462753 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.462765 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.462773 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:45Z","lastTransitionTime":"2025-12-11T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:45 crc kubenswrapper[4746]: E1211 09:54:45.478504 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:45Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.482310 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.482335 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.482344 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.482369 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.482377 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:45Z","lastTransitionTime":"2025-12-11T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:45 crc kubenswrapper[4746]: E1211 09:54:45.494515 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:45Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.498970 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.499017 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.499067 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.499095 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.499111 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:45Z","lastTransitionTime":"2025-12-11T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:45 crc kubenswrapper[4746]: E1211 09:54:45.514274 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:45Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.518147 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.518187 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.518199 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.518214 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.518225 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:45Z","lastTransitionTime":"2025-12-11T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:45 crc kubenswrapper[4746]: E1211 09:54:45.532248 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:45Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:45 crc kubenswrapper[4746]: E1211 09:54:45.532418 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.534204 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.534255 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.534275 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.534298 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.534315 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:45Z","lastTransitionTime":"2025-12-11T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.630267 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:45 crc kubenswrapper[4746]: E1211 09:54:45.630446 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.636350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.636409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.636430 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.636456 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.636478 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:45Z","lastTransitionTime":"2025-12-11T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.739980 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.740110 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.740142 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.740175 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.740198 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:45Z","lastTransitionTime":"2025-12-11T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.843037 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.843103 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.843118 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.843137 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.843154 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:45Z","lastTransitionTime":"2025-12-11T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.949490 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.949560 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.949583 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.949625 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:45 crc kubenswrapper[4746]: I1211 09:54:45.949660 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:45Z","lastTransitionTime":"2025-12-11T09:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.052976 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.053015 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.053024 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.053040 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.053073 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:46Z","lastTransitionTime":"2025-12-11T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.155950 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.156011 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.156030 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.156082 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.156104 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:46Z","lastTransitionTime":"2025-12-11T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.259573 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.260001 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.260221 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.260473 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.260676 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:46Z","lastTransitionTime":"2025-12-11T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.363586 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.363647 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.363663 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.363683 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.363698 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:46Z","lastTransitionTime":"2025-12-11T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.467262 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.467324 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.467360 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.467393 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.467416 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:46Z","lastTransitionTime":"2025-12-11T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.570553 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.570632 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.570652 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.570675 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.570693 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:46Z","lastTransitionTime":"2025-12-11T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.630163 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.630166 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:46 crc kubenswrapper[4746]: E1211 09:54:46.630415 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:46 crc kubenswrapper[4746]: E1211 09:54:46.630573 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.630200 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:46 crc kubenswrapper[4746]: E1211 09:54:46.630724 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.673512 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.673592 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.673626 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.673657 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.673677 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:46Z","lastTransitionTime":"2025-12-11T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.776472 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.776539 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.776558 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.776583 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.776600 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:46Z","lastTransitionTime":"2025-12-11T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.879962 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.880027 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.880094 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.880133 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.880153 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:46Z","lastTransitionTime":"2025-12-11T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.982684 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.982748 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.982765 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.982784 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:46 crc kubenswrapper[4746]: I1211 09:54:46.982797 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:46Z","lastTransitionTime":"2025-12-11T09:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.086080 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.086152 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.086171 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.086199 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.086217 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:47Z","lastTransitionTime":"2025-12-11T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.188575 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.188663 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.188687 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.188721 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.188745 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:47Z","lastTransitionTime":"2025-12-11T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.291602 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.291669 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.291696 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.291727 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.291751 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:47Z","lastTransitionTime":"2025-12-11T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.395675 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.395723 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.395741 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.395765 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.395782 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:47Z","lastTransitionTime":"2025-12-11T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.498523 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.498608 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.498632 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.498663 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.498686 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:47Z","lastTransitionTime":"2025-12-11T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.602433 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.602506 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.602529 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.602560 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.602583 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:47Z","lastTransitionTime":"2025-12-11T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.630550 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:47 crc kubenswrapper[4746]: E1211 09:54:47.631035 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.655617 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.677608 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f86bd9b-2d54-4712-ad26-0a49e1b146dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffb65884bd374a1f68af73bd917b4b030ca7c890d5dbdeace452d63bcfc8d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d497e87b108ba839d40a853ccdf1c277759bd4b987852da432b7f876d1ce1890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a1e1264ef536cc9003210b7d9fc47faa3d6b2b80bf328fb0fafddb91e1ad5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.693644 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.705376 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.705649 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.705770 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 
09:54:47.705861 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.705941 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:47Z","lastTransitionTime":"2025-12-11T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.714428 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.732620 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.744947 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.758393 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.771118 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc 
kubenswrapper[4746]: I1211 09:54:47.784467 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.797708 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.808774 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.808817 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.808826 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.808839 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.808849 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:47Z","lastTransitionTime":"2025-12-11T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.813743 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.836971 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\
\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"re
source-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T0
9:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.849354 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.869372 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ba2ec818f1eba92980a4c970cddac6ef9177d377227f496f116bc20ce880b4fa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"new object: *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607269 6169 ovn.go:134] Ensuring zone local for Pod openshift-multus/multus-additional-cni-plugins-vtfvl in node crc\\\\nI1211 09:54:24.607274 6169 obj_retry.go:386] Retry successful for *v1.Pod openshift-multus/multus-additional-cni-plugins-vtfvl after 0 
failed attempt(s)\\\\nI1211 09:54:24.607278 6169 default_network_controller.go:776] Recording success event on pod openshift-multus/multus-additional-cni-plugins-vtfvl\\\\nI1211 09:54:24.607290 6169 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI1211 09:54:24.607328 6169 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-network-console/networking-console-plugin\\\\\\\"}\\\\nF1211 09:54:24.607339 6169 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occur\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:23Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"message\\\":\\\"utate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] 
Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 09:54:41.930797 6374 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1211 09:54:41.932734 6374 ovnkube.go:599] Stopped ovnkube\\\\nI1211 09:54:41.932775 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 09:54:41.932842 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536
ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.884604 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.894718 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.905364 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8832c4e289c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51756d637dfd34e4eacc4f016219576937bcb063378f40943fbef7aea387db50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:47Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.911165 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.911208 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.911221 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.911240 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:47 crc kubenswrapper[4746]: I1211 09:54:47.911253 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:47Z","lastTransitionTime":"2025-12-11T09:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.013503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.013545 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.013556 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.013571 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.013581 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:48Z","lastTransitionTime":"2025-12-11T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.116941 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.116983 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.116999 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.117020 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.117033 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:48Z","lastTransitionTime":"2025-12-11T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.220137 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.220213 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.220241 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.220270 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.220291 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:48Z","lastTransitionTime":"2025-12-11T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.323504 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.323534 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.323542 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.323555 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.323564 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:48Z","lastTransitionTime":"2025-12-11T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.426424 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.426488 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.426509 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.426535 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.426556 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:48Z","lastTransitionTime":"2025-12-11T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.529650 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.529678 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.529686 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.529699 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.529707 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:48Z","lastTransitionTime":"2025-12-11T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.629404 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.629617 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:48 crc kubenswrapper[4746]: E1211 09:54:48.629788 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.630648 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:48 crc kubenswrapper[4746]: E1211 09:54:48.630834 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:48 crc kubenswrapper[4746]: E1211 09:54:48.631137 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.634403 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.634437 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.634448 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.634474 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.634491 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:48Z","lastTransitionTime":"2025-12-11T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.737001 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.737110 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.737133 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.737162 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.737186 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:48Z","lastTransitionTime":"2025-12-11T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.839520 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.839564 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.839576 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.839593 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.839633 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:48Z","lastTransitionTime":"2025-12-11T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.942012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.942083 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.942101 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.942120 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:48 crc kubenswrapper[4746]: I1211 09:54:48.942135 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:48Z","lastTransitionTime":"2025-12-11T09:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.045030 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.045095 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.045107 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.045123 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.045134 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:49Z","lastTransitionTime":"2025-12-11T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.147633 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.147678 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.147690 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.147706 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.147718 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:49Z","lastTransitionTime":"2025-12-11T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.250440 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.250510 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.250534 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.250567 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.250592 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:49Z","lastTransitionTime":"2025-12-11T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.353190 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.353245 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.353271 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.353302 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.353326 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:49Z","lastTransitionTime":"2025-12-11T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.456641 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.456707 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.456725 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.456752 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.456772 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:49Z","lastTransitionTime":"2025-12-11T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.559328 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.559407 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.559427 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.559452 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.559478 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:49Z","lastTransitionTime":"2025-12-11T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.630141 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:49 crc kubenswrapper[4746]: E1211 09:54:49.630359 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.661774 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.661826 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.661838 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.661859 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.661871 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:49Z","lastTransitionTime":"2025-12-11T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.764011 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.764091 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.764110 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.764135 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.764150 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:49Z","lastTransitionTime":"2025-12-11T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.866811 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.866854 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.866866 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.866883 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.866895 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:49Z","lastTransitionTime":"2025-12-11T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.969535 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.969599 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.969615 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.969641 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:49 crc kubenswrapper[4746]: I1211 09:54:49.969659 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:49Z","lastTransitionTime":"2025-12-11T09:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.072471 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.072538 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.072548 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.072562 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.072571 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:50Z","lastTransitionTime":"2025-12-11T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.175640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.175703 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.175720 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.175743 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.175760 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:50Z","lastTransitionTime":"2025-12-11T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.281372 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.281486 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.281510 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.281536 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.281571 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:50Z","lastTransitionTime":"2025-12-11T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.385230 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.385306 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.385348 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.385384 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.385408 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:50Z","lastTransitionTime":"2025-12-11T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.489404 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.489464 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.489503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.489534 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.489553 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:50Z","lastTransitionTime":"2025-12-11T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.593533 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.593578 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.593590 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.593607 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.593620 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:50Z","lastTransitionTime":"2025-12-11T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.630268 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.630349 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 09:54:50 crc kubenswrapper[4746]: E1211 09:54:50.630421 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.630372 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 09:54:50 crc kubenswrapper[4746]: E1211 09:54:50.630573 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 09:54:50 crc kubenswrapper[4746]: E1211 09:54:50.630700 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.696440 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.696518 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.696543 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.696572 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.696595 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:50Z","lastTransitionTime":"2025-12-11T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.800037 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.800124 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.800161 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.800201 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.800224 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:50Z","lastTransitionTime":"2025-12-11T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.903347 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.903391 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.903402 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.903422 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:50 crc kubenswrapper[4746]: I1211 09:54:50.903439 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:50Z","lastTransitionTime":"2025-12-11T09:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.006404 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.006482 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.006502 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.006528 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.006543 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:51Z","lastTransitionTime":"2025-12-11T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.109901 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.109989 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.110005 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.110027 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.110104 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:51Z","lastTransitionTime":"2025-12-11T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.213189 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.213222 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.213232 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.213246 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.213255 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:51Z","lastTransitionTime":"2025-12-11T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.315007 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.315084 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.315101 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.315123 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.315140 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:51Z","lastTransitionTime":"2025-12-11T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.417208 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.417255 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.417272 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.417292 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.417307 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:51Z","lastTransitionTime":"2025-12-11T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.519721 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.519811 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.519841 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.519874 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.519897 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:51Z","lastTransitionTime":"2025-12-11T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.622910 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.622954 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.622967 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.622982 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.622992 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:51Z","lastTransitionTime":"2025-12-11T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.630303 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv"
Dec 11 09:54:51 crc kubenswrapper[4746]: E1211 09:54:51.630406 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.725169 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.725209 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.725224 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.725244 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.725260 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:51Z","lastTransitionTime":"2025-12-11T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.827693 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.827730 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.827741 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.827758 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.827773 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:51Z","lastTransitionTime":"2025-12-11T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.930355 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.930386 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.930396 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.930411 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:51 crc kubenswrapper[4746]: I1211 09:54:51.930422 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:51Z","lastTransitionTime":"2025-12-11T09:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.032848 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.032888 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.032897 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.032912 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.032921 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:52Z","lastTransitionTime":"2025-12-11T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.135682 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.135720 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.135730 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.135745 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.135756 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:52Z","lastTransitionTime":"2025-12-11T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.238002 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.238036 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.238080 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.238097 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.238107 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:52Z","lastTransitionTime":"2025-12-11T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.340220 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.340257 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.340268 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.340282 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.340296 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:52Z","lastTransitionTime":"2025-12-11T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.442501 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.442537 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.442545 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.442559 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.442570 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:52Z","lastTransitionTime":"2025-12-11T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.544924 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.544953 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.544961 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.544975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.544983 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:52Z","lastTransitionTime":"2025-12-11T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.629959 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.630004 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 11 09:54:52 crc kubenswrapper[4746]: E1211 09:54:52.630096 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.630038 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 11 09:54:52 crc kubenswrapper[4746]: E1211 09:54:52.630239 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 11 09:54:52 crc kubenswrapper[4746]: E1211 09:54:52.630354 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.647438 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.647483 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.647494 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.647512 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.647525 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:52Z","lastTransitionTime":"2025-12-11T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.750357 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.750409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.750423 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.750456 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.750469 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:52Z","lastTransitionTime":"2025-12-11T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.852330 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.852376 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.852385 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.852417 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.852429 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:52Z","lastTransitionTime":"2025-12-11T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.954325 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.954363 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.954374 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.954390 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:52 crc kubenswrapper[4746]: I1211 09:54:52.954400 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:52Z","lastTransitionTime":"2025-12-11T09:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.056741 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.056779 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.056787 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.056804 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.056812 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:53Z","lastTransitionTime":"2025-12-11T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.171283 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.171342 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.171361 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.171386 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.171402 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:53Z","lastTransitionTime":"2025-12-11T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.274352 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.274387 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.274396 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.274409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.274418 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:53Z","lastTransitionTime":"2025-12-11T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.376195 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.376230 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.376242 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.376258 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.376270 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:53Z","lastTransitionTime":"2025-12-11T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.478326 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.478364 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.478374 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.478388 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.478398 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:53Z","lastTransitionTime":"2025-12-11T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.580851 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.580900 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.580912 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.580933 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.580948 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:53Z","lastTransitionTime":"2025-12-11T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.630387 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:53 crc kubenswrapper[4746]: E1211 09:54:53.630519 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.683762 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.683816 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.683827 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.683844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.683857 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:53Z","lastTransitionTime":"2025-12-11T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.786211 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.786482 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.786542 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.786610 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.786680 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:53Z","lastTransitionTime":"2025-12-11T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.888995 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.889248 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.889315 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.889379 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.889465 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:53Z","lastTransitionTime":"2025-12-11T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.991613 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.991665 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.991679 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.991700 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:53 crc kubenswrapper[4746]: I1211 09:54:53.991715 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:53Z","lastTransitionTime":"2025-12-11T09:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.094396 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.094436 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.094461 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.094475 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.094484 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:54Z","lastTransitionTime":"2025-12-11T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.197180 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.197220 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.197229 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.197244 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.197252 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:54Z","lastTransitionTime":"2025-12-11T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.299379 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.299632 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.299774 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.299861 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.299928 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:54Z","lastTransitionTime":"2025-12-11T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.402269 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.402303 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.402320 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.402334 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.402342 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:54Z","lastTransitionTime":"2025-12-11T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.504482 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.504526 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.504537 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.504553 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.504567 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:54Z","lastTransitionTime":"2025-12-11T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.606650 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.606685 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.606696 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.606709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.606719 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:54Z","lastTransitionTime":"2025-12-11T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.629747 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.629779 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:54 crc kubenswrapper[4746]: E1211 09:54:54.629905 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.630014 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:54 crc kubenswrapper[4746]: E1211 09:54:54.630099 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:54 crc kubenswrapper[4746]: E1211 09:54:54.630324 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.709570 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.709617 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.709627 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.709640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.709649 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:54Z","lastTransitionTime":"2025-12-11T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.811496 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.811536 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.811548 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.811563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.811575 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:54Z","lastTransitionTime":"2025-12-11T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.913857 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.913897 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.913913 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.913934 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:54 crc kubenswrapper[4746]: I1211 09:54:54.913949 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:54Z","lastTransitionTime":"2025-12-11T09:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.016244 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.016519 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.016582 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.016665 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.016730 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:55Z","lastTransitionTime":"2025-12-11T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.118658 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.118903 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.119023 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.119137 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.119214 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:55Z","lastTransitionTime":"2025-12-11T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.221616 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.221660 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.221671 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.221688 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.221702 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:55Z","lastTransitionTime":"2025-12-11T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.324263 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.324319 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.324332 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.324355 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.324369 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:55Z","lastTransitionTime":"2025-12-11T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.427753 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.427796 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.427809 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.427828 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.427841 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:55Z","lastTransitionTime":"2025-12-11T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.530908 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.530958 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.530971 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.531001 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.531012 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:55Z","lastTransitionTime":"2025-12-11T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.629730 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:55 crc kubenswrapper[4746]: E1211 09:54:55.629929 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.633746 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.633780 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.633791 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.633805 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.633816 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:55Z","lastTransitionTime":"2025-12-11T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.735606 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.735660 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.735672 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.735690 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.735706 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:55Z","lastTransitionTime":"2025-12-11T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.838363 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.838411 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.838425 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.838444 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.838459 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:55Z","lastTransitionTime":"2025-12-11T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.856398 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.856446 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.856459 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.856475 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.856487 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:55Z","lastTransitionTime":"2025-12-11T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:55 crc kubenswrapper[4746]: E1211 09:54:55.873980 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:55Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.878811 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.878848 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.878858 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.878873 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.878882 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:55Z","lastTransitionTime":"2025-12-11T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:55 crc kubenswrapper[4746]: E1211 09:54:55.891332 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:55Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.895061 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.895089 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.895098 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.895112 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.895148 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:55Z","lastTransitionTime":"2025-12-11T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:55 crc kubenswrapper[4746]: E1211 09:54:55.913042 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:55Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.916782 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.916811 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.916819 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.916833 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.916842 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:55Z","lastTransitionTime":"2025-12-11T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:55 crc kubenswrapper[4746]: E1211 09:54:55.929449 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:55Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.932694 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.932802 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.932867 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.932938 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.933022 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:55Z","lastTransitionTime":"2025-12-11T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:55 crc kubenswrapper[4746]: E1211 09:54:55.946394 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:55Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:55 crc kubenswrapper[4746]: E1211 09:54:55.946721 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.948401 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.948494 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.948555 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.948621 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:55 crc kubenswrapper[4746]: I1211 09:54:55.948684 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:55Z","lastTransitionTime":"2025-12-11T09:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.051689 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.051754 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.051771 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.051794 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.051811 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:56Z","lastTransitionTime":"2025-12-11T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.154648 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.154682 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.154694 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.154709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.154720 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:56Z","lastTransitionTime":"2025-12-11T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.256586 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.256617 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.256626 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.256639 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.256648 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:56Z","lastTransitionTime":"2025-12-11T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.359021 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.359132 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.359160 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.359187 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.359209 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:56Z","lastTransitionTime":"2025-12-11T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.418178 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs\") pod \"network-metrics-daemon-xh6zv\" (UID: \"2a55e871-062f-43fd-a1e2-b2296474f4f3\") " pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:56 crc kubenswrapper[4746]: E1211 09:54:56.418293 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 09:54:56 crc kubenswrapper[4746]: E1211 09:54:56.418338 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs podName:2a55e871-062f-43fd-a1e2-b2296474f4f3 nodeName:}" failed. No retries permitted until 2025-12-11 09:55:28.418323616 +0000 UTC m=+101.278186929 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs") pod "network-metrics-daemon-xh6zv" (UID: "2a55e871-062f-43fd-a1e2-b2296474f4f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.461796 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.461841 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.461852 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.461868 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.461878 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:56Z","lastTransitionTime":"2025-12-11T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.564402 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.564464 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.564475 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.564490 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.564527 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:56Z","lastTransitionTime":"2025-12-11T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.629648 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.629698 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.629663 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:56 crc kubenswrapper[4746]: E1211 09:54:56.629756 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:56 crc kubenswrapper[4746]: E1211 09:54:56.629865 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:56 crc kubenswrapper[4746]: E1211 09:54:56.629959 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.630455 4746 scope.go:117] "RemoveContainer" containerID="295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6" Dec 11 09:54:56 crc kubenswrapper[4746]: E1211 09:54:56.630720 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-w2s5z_openshift-ovn-kubernetes(014636cb-e768-4554-9556-460db2ebfdcb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podUID="014636cb-e768-4554-9556-460db2ebfdcb" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.644280 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.666894 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.666940 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.666951 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.666967 4746 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.666978 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:56Z","lastTransitionTime":"2025-12-11T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.674975 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"message\\\":\\\"utate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 09:54:41.930797 6374 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1211 09:54:41.932734 6374 ovnkube.go:599] Stopped ovnkube\\\\nI1211 09:54:41.932775 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 09:54:41.932842 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w2s5z_openshift-ovn-kubernetes(014636cb-e768-4554-9556-460db2ebfdcb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144
ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.687871 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.699380 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.710782 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8832c4e289c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51756d637dfd34e4eacc4f016219576937bcb063378f40943fbef7aea387db50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.725310 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\
\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3ae
b4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.737516 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f86bd9b-2d54-4712-ad26-0a49e1b146dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffb65884bd374a1f68af73bd917b4b030ca7c890d5dbdeace452d63bcfc8d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d497e87b108ba839d40a853ccdf1c277759bd4b987852da432b7f876d1ce1890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a1e1264ef536cc9003210b7d9fc47faa3d6b2b80bf328fb0fafddb91e1ad5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.751900 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.763251 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.768698 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.768729 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.768739 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.768753 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.768764 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:56Z","lastTransitionTime":"2025-12-11T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.774799 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.785227 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.796247 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.805886 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.814253 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc 
kubenswrapper[4746]: I1211 09:54:56.826365 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.839748 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.855301 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:56Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.870948 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.871001 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.871013 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.871031 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.871057 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:56Z","lastTransitionTime":"2025-12-11T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.973426 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.973465 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.973473 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.973485 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:56 crc kubenswrapper[4746]: I1211 09:54:56.973493 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:56Z","lastTransitionTime":"2025-12-11T09:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.076547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.076597 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.076608 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.076625 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.076638 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:57Z","lastTransitionTime":"2025-12-11T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.178501 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.178573 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.178582 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.178616 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.178626 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:57Z","lastTransitionTime":"2025-12-11T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.281249 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.281312 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.281333 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.281358 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.281375 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:57Z","lastTransitionTime":"2025-12-11T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.384640 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.384864 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.384914 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.384945 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.384967 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:57Z","lastTransitionTime":"2025-12-11T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.487581 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.487609 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.487619 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.487634 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.487644 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:57Z","lastTransitionTime":"2025-12-11T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.589834 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.589864 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.589873 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.589888 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.589897 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:57Z","lastTransitionTime":"2025-12-11T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.630440 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:57 crc kubenswrapper[4746]: E1211 09:54:57.630592 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.647225 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.659297 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.671512 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f86bd9b-2d54-4712-ad26-0a49e1b146dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffb65884bd374a1f68af73bd917b4b030ca7c890d5dbdeace452d63bcfc8d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d497e87b108ba839d40a853ccdf1c277759bd4b987852da432b7f876d1ce1890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a1e1264ef536cc9003210b7d9fc47faa3d6b2b80bf328fb0fafddb91e1ad5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.683884 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.691362 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.691386 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.691395 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 
09:54:57.691409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.691418 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:57Z","lastTransitionTime":"2025-12-11T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.695659 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4e
f318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d74dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.709123 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc 
kubenswrapper[4746]: I1211 09:54:57.721085 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.735742 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.753665 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad8
88d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.766491 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.776830 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.793389 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.794274 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.794321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.794333 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.794352 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.794365 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:57Z","lastTransitionTime":"2025-12-11T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.805100 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.815263 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8832c4e28
9c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51756d637dfd34e4eacc4f016219576937bcb063378f40943fbef7aea387db50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.828335 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd
791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] 
Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-
12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.837890 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.860019 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"message\\\":\\\"utate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 09:54:41.930797 6374 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1211 09:54:41.932734 6374 ovnkube.go:599] Stopped ovnkube\\\\nI1211 09:54:41.932775 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 09:54:41.932842 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w2s5z_openshift-ovn-kubernetes(014636cb-e768-4554-9556-460db2ebfdcb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144
ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:54:57Z is after 2025-08-24T17:21:41Z" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.896277 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.896309 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.896317 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.896330 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.896338 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:57Z","lastTransitionTime":"2025-12-11T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.998787 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.998854 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.998878 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.998908 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:57 crc kubenswrapper[4746]: I1211 09:54:57.998933 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:57Z","lastTransitionTime":"2025-12-11T09:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.101555 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.101595 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.101607 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.101623 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.101636 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:58Z","lastTransitionTime":"2025-12-11T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.203465 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.203594 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.203609 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.203625 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.203635 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:58Z","lastTransitionTime":"2025-12-11T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.305535 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.305600 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.305613 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.305632 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.305644 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:58Z","lastTransitionTime":"2025-12-11T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.408112 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.408147 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.408155 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.408169 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.408178 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:58Z","lastTransitionTime":"2025-12-11T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.510709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.510745 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.510756 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.510773 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.510784 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:58Z","lastTransitionTime":"2025-12-11T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.613843 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.613895 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.613911 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.613934 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.613949 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:58Z","lastTransitionTime":"2025-12-11T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.630176 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:54:58 crc kubenswrapper[4746]: E1211 09:54:58.630278 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.630420 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:54:58 crc kubenswrapper[4746]: E1211 09:54:58.630483 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.630605 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:54:58 crc kubenswrapper[4746]: E1211 09:54:58.630660 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.716419 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.716459 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.716470 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.716485 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.716496 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:58Z","lastTransitionTime":"2025-12-11T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.818257 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.818306 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.818324 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.818340 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.818350 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:58Z","lastTransitionTime":"2025-12-11T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.920176 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.920217 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.920229 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.920249 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:58 crc kubenswrapper[4746]: I1211 09:54:58.920261 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:58Z","lastTransitionTime":"2025-12-11T09:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.022876 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.022930 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.022939 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.022955 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.022963 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:59Z","lastTransitionTime":"2025-12-11T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.124943 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.124981 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.124991 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.125012 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.125021 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:59Z","lastTransitionTime":"2025-12-11T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.227062 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.227106 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.227117 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.227135 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.227148 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:59Z","lastTransitionTime":"2025-12-11T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.329245 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.329291 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.329303 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.329321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.329334 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:59Z","lastTransitionTime":"2025-12-11T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.435255 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.435393 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.435408 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.435431 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.435448 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:59Z","lastTransitionTime":"2025-12-11T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.537801 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.537839 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.537849 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.537863 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.537872 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:59Z","lastTransitionTime":"2025-12-11T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.630162 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:54:59 crc kubenswrapper[4746]: E1211 09:54:59.630315 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.640242 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.640271 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.640280 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.640293 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.640302 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:59Z","lastTransitionTime":"2025-12-11T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.742704 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.742738 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.742745 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.742760 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.742769 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:59Z","lastTransitionTime":"2025-12-11T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.844631 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.844672 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.844688 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.844708 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.844723 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:59Z","lastTransitionTime":"2025-12-11T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.947535 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.947569 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.947578 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.947592 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:54:59 crc kubenswrapper[4746]: I1211 09:54:59.947604 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:54:59Z","lastTransitionTime":"2025-12-11T09:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.007470 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r622c_52ba00d9-b0ef-4496-a6b8-e170f405c592/kube-multus/0.log" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.007528 4746 generic.go:334] "Generic (PLEG): container finished" podID="52ba00d9-b0ef-4496-a6b8-e170f405c592" containerID="0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade" exitCode=1 Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.007560 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r622c" event={"ID":"52ba00d9-b0ef-4496-a6b8-e170f405c592","Type":"ContainerDied","Data":"0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade"} Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.007943 4746 scope.go:117] "RemoveContainer" containerID="0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.023954 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.047212 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.049547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.049583 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.049593 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.049609 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.049619 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:00Z","lastTransitionTime":"2025-12-11T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.063933 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.075458 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.088183 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8832c4e289c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51756d637dfd34e4eacc4f016219576937bcb
063378f40943fbef7aea387db50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.103992 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd47
7174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.114832 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.134318 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"message\\\":\\\"utate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 09:54:41.930797 6374 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1211 09:54:41.932734 6374 ovnkube.go:599] Stopped ovnkube\\\\nI1211 09:54:41.932775 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 09:54:41.932842 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w2s5z_openshift-ovn-kubernetes(014636cb-e768-4554-9556-460db2ebfdcb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144
ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.149137 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:00Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:59Z\\\",\\\"message\\\":\\\"2025-12-11T09:54:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ec997725-6a86-46b4-a40f-43a3a4321fe1\\\\n2025-12-11T09:54:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ec997725-6a86-46b4-a40f-43a3a4321fe1 to /host/opt/cni/bin/\\\\n2025-12-11T09:54:14Z [verbose] multus-daemon started\\\\n2025-12-11T09:54:14Z [verbose] Readiness Indicator file check\\\\n2025-12-11T09:54:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.151600 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.151621 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.151630 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.151642 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.151652 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:00Z","lastTransitionTime":"2025-12-11T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.161071 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06
d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.176118 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f86bd9b-2d54-4712-ad26-0a49e1b146dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffb65884bd374a1f68af73bd917b4b030ca7c890d5dbdeace452d63bcfc8d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d497e87b108ba839d40a853ccdf1c277759bd4b987852da432b7f876d1ce1890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a1e1264ef536cc9003210b7d9fc47faa3d6b2b80bf328fb0fafddb91e1ad5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.189992 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.204289 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.217935 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.233087 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.249627 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\
"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.253462 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.253494 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.253503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.253515 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.253524 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:00Z","lastTransitionTime":"2025-12-11T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.264450 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d74dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:00Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.355736 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.355817 4746 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.355829 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.355845 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.355857 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:00Z","lastTransitionTime":"2025-12-11T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.458852 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.458904 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.458916 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.458938 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.458951 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:00Z","lastTransitionTime":"2025-12-11T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.562352 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.562406 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.562422 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.562444 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.562460 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:00Z","lastTransitionTime":"2025-12-11T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.630325 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.630350 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.630467 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:00 crc kubenswrapper[4746]: E1211 09:55:00.630605 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:00 crc kubenswrapper[4746]: E1211 09:55:00.630939 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:00 crc kubenswrapper[4746]: E1211 09:55:00.631008 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.646117 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.664356 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.664400 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.664410 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.664427 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.664440 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:00Z","lastTransitionTime":"2025-12-11T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.766349 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.766401 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.766413 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.766430 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.766441 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:00Z","lastTransitionTime":"2025-12-11T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.868265 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.868298 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.868306 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.868319 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.868329 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:00Z","lastTransitionTime":"2025-12-11T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.973764 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.973828 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.973846 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.973880 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:00 crc kubenswrapper[4746]: I1211 09:55:00.973896 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:00Z","lastTransitionTime":"2025-12-11T09:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.013669 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r622c_52ba00d9-b0ef-4496-a6b8-e170f405c592/kube-multus/0.log" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.013819 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r622c" event={"ID":"52ba00d9-b0ef-4496-a6b8-e170f405c592","Type":"ContainerStarted","Data":"2e317f8ae14d2ec2bb140d6293e5de8f1b9f1403d8ec9e68da06711aa5c8e467"} Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.029302 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.041257 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f86bd9b-2d54-4712-ad26-0a49e1b146dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffb65884bd374a1f68af73bd917b4b030ca7c890d5dbdeace452d63bcfc8d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d497e87b108ba839d40a853ccdf1c277759bd4b987852da432b7f876d1ce1890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a1e1264ef536cc9003210b7d9fc47faa3d6b2b80bf328fb0fafddb91e1ad5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.058836 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.073366 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.075874 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.075948 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.075961 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.075977 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.075988 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:01Z","lastTransitionTime":"2025-12-11T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.087140 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.098670 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},
{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.109288 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.118186 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc 
kubenswrapper[4746]: I1211 09:55:01.126890 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b33732d-642a-461e-929c-6c69e9b4a3d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319e929376197a4ad59aba9f64c20474d60de480e41f84cc01de5fbc7fa3bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://4198aee534e1b263316ac333389124e1416f15b759253ea951e004336f2b86ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4198aee534e1b263316ac333389124e1416f15b759253ea951e004336f2b86ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.137106 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.149979 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.163752 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.177820 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use 
of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\
\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.179223 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.179260 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.179272 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.179288 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.179299 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:01Z","lastTransitionTime":"2025-12-11T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.189037 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.214940 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"message\\\":\\\"utate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 09:54:41.930797 6374 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1211 09:54:41.932734 6374 ovnkube.go:599] Stopped ovnkube\\\\nI1211 09:54:41.932775 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 09:54:41.932842 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w2s5z_openshift-ovn-kubernetes(014636cb-e768-4554-9556-460db2ebfdcb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144
ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.241306 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e317f8ae14d2ec2bb140d6293e5de8f1b9f1403d8ec9e68da06711aa5c8e467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:59Z\\\",\\\"message\\\":\\\"2025-12-11T09:54:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ec997725-6a86-46b4-a40f-43a3a4321fe1\\\\n2025-12-11T09:54:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ec997725-6a86-46b4-a40f-43a3a4321fe1 to /host/opt/cni/bin/\\\\n2025-12-11T09:54:14Z [verbose] multus-daemon started\\\\n2025-12-11T09:54:14Z [verbose] 
Readiness Indicator file check\\\\n2025-12-11T09:54:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.257812 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab4
5a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.269301 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8832c4e289c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51756d637dfd34e4eacc4f016219576937bcb
063378f40943fbef7aea387db50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:01Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.281898 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.281940 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.281952 4746 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.281968 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.281978 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:01Z","lastTransitionTime":"2025-12-11T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.384654 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.384692 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.384701 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.384714 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.384722 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:01Z","lastTransitionTime":"2025-12-11T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.486988 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.487032 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.487062 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.487080 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.487095 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:01Z","lastTransitionTime":"2025-12-11T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.589641 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.589700 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.589717 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.589741 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.589760 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:01Z","lastTransitionTime":"2025-12-11T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.629452 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:01 crc kubenswrapper[4746]: E1211 09:55:01.629821 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.691749 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.691801 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.691812 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.691834 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.691847 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:01Z","lastTransitionTime":"2025-12-11T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.794097 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.794147 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.794158 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.794176 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.794188 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:01Z","lastTransitionTime":"2025-12-11T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.896258 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.896290 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.896299 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.896311 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.896320 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:01Z","lastTransitionTime":"2025-12-11T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.998801 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.998845 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.998855 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.998869 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:01 crc kubenswrapper[4746]: I1211 09:55:01.998877 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:01Z","lastTransitionTime":"2025-12-11T09:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.101700 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.101750 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.101769 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.101786 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.101797 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:02Z","lastTransitionTime":"2025-12-11T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.204271 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.204322 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.204334 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.204355 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.204371 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:02Z","lastTransitionTime":"2025-12-11T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.306530 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.306593 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.306610 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.306633 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.306649 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:02Z","lastTransitionTime":"2025-12-11T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.409165 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.409212 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.409227 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.409245 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.409259 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:02Z","lastTransitionTime":"2025-12-11T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.511492 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.511538 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.511552 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.511572 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.511587 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:02Z","lastTransitionTime":"2025-12-11T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.614860 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.614913 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.614925 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.614943 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.614954 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:02Z","lastTransitionTime":"2025-12-11T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.629642 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.629769 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.629546 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:02 crc kubenswrapper[4746]: E1211 09:55:02.630010 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:02 crc kubenswrapper[4746]: E1211 09:55:02.630243 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:02 crc kubenswrapper[4746]: E1211 09:55:02.630377 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.718295 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.718339 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.718356 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.718373 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.718390 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:02Z","lastTransitionTime":"2025-12-11T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.821412 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.821454 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.821467 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.821483 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.821494 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:02Z","lastTransitionTime":"2025-12-11T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.923493 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.923540 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.923550 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.923565 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:02 crc kubenswrapper[4746]: I1211 09:55:02.923574 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:02Z","lastTransitionTime":"2025-12-11T09:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.026167 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.026210 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.026219 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.026232 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.026241 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:03Z","lastTransitionTime":"2025-12-11T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.131206 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.131280 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.131312 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.131343 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.131433 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:03Z","lastTransitionTime":"2025-12-11T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.233679 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.233729 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.233757 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.233780 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.233794 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:03Z","lastTransitionTime":"2025-12-11T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.335571 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.335607 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.335620 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.335635 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.335645 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:03Z","lastTransitionTime":"2025-12-11T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.438475 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.438541 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.438559 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.438589 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.438616 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:03Z","lastTransitionTime":"2025-12-11T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.541670 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.541714 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.541732 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.541749 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.542008 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:03Z","lastTransitionTime":"2025-12-11T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.629983 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:03 crc kubenswrapper[4746]: E1211 09:55:03.630183 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.643909 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.643947 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.643957 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.643974 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.643985 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:03Z","lastTransitionTime":"2025-12-11T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.747279 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.747350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.747376 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.747405 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.747429 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:03Z","lastTransitionTime":"2025-12-11T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.851134 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.851181 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.851190 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.851213 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.851232 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:03Z","lastTransitionTime":"2025-12-11T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.958259 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.958319 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.958339 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.958422 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:03 crc kubenswrapper[4746]: I1211 09:55:03.958462 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:03Z","lastTransitionTime":"2025-12-11T09:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.061296 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.061343 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.061351 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.061366 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.061376 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:04Z","lastTransitionTime":"2025-12-11T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.163771 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.163847 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.163866 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.163889 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.163908 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:04Z","lastTransitionTime":"2025-12-11T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.267262 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.267293 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.267302 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.267315 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.267323 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:04Z","lastTransitionTime":"2025-12-11T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.369985 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.370066 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.370088 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.370109 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.370125 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:04Z","lastTransitionTime":"2025-12-11T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.472877 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.472944 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.472958 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.472975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.472989 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:04Z","lastTransitionTime":"2025-12-11T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.575212 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.575265 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.575274 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.575288 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.575297 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:04Z","lastTransitionTime":"2025-12-11T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.629529 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.629622 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:04 crc kubenswrapper[4746]: E1211 09:55:04.629687 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.629867 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:04 crc kubenswrapper[4746]: E1211 09:55:04.629952 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:04 crc kubenswrapper[4746]: E1211 09:55:04.630006 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.677499 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.677536 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.677543 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.677556 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.677564 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:04Z","lastTransitionTime":"2025-12-11T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.780644 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.780712 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.780734 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.780762 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.780784 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:04Z","lastTransitionTime":"2025-12-11T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.883353 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.883418 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.883498 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.883538 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.883601 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:04Z","lastTransitionTime":"2025-12-11T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.986541 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.986624 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.986646 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.986677 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:04 crc kubenswrapper[4746]: I1211 09:55:04.986699 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:04Z","lastTransitionTime":"2025-12-11T09:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.090246 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.090332 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.090365 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.090632 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.090686 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:05Z","lastTransitionTime":"2025-12-11T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.194301 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.194364 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.194373 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.194387 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.194395 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:05Z","lastTransitionTime":"2025-12-11T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.296771 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.296852 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.296877 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.296912 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.296934 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:05Z","lastTransitionTime":"2025-12-11T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.400337 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.400408 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.400455 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.400547 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.400889 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:05Z","lastTransitionTime":"2025-12-11T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.503687 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.503753 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.503772 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.503798 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.503816 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:05Z","lastTransitionTime":"2025-12-11T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.624305 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.624364 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.624385 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.624410 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.624433 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:05Z","lastTransitionTime":"2025-12-11T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.630240 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:05 crc kubenswrapper[4746]: E1211 09:55:05.630358 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.727482 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.727541 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.727559 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.727583 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.727601 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:05Z","lastTransitionTime":"2025-12-11T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.830301 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.830341 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.830350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.830366 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.830377 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:05Z","lastTransitionTime":"2025-12-11T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.932655 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.932686 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.932694 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.932708 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:05 crc kubenswrapper[4746]: I1211 09:55:05.932716 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:05Z","lastTransitionTime":"2025-12-11T09:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.021410 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.021470 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.021482 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.021507 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.021586 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:06Z","lastTransitionTime":"2025-12-11T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:06 crc kubenswrapper[4746]: E1211 09:55:06.039914 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:06Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.046095 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.046150 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.046167 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.046196 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.046214 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:06Z","lastTransitionTime":"2025-12-11T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:06 crc kubenswrapper[4746]: E1211 09:55:06.064572 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:06Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.069418 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.069627 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.069721 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.069838 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.069934 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:06Z","lastTransitionTime":"2025-12-11T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:06 crc kubenswrapper[4746]: E1211 09:55:06.085075 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:06Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.089702 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.089745 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.089755 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.089769 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.089777 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:06Z","lastTransitionTime":"2025-12-11T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:06 crc kubenswrapper[4746]: E1211 09:55:06.104125 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:06Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.107938 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.108153 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.108279 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.108423 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.108545 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:06Z","lastTransitionTime":"2025-12-11T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:06 crc kubenswrapper[4746]: E1211 09:55:06.122759 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"d3edc674-d518-40d2-b40b-af693a175be6\\\",\\\"systemUUID\\\":\\\"5d868abb-9952-4b6b-be6c-e2bc736f8f4d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:06Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:06 crc kubenswrapper[4746]: E1211 09:55:06.122905 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.124376 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.124412 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.124423 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.124442 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.124455 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:06Z","lastTransitionTime":"2025-12-11T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.227461 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.227788 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.228110 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.228328 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.228482 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:06Z","lastTransitionTime":"2025-12-11T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.331861 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.331924 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.331949 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.331963 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.331972 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:06Z","lastTransitionTime":"2025-12-11T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.434936 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.434966 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.434975 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.434989 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.434998 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:06Z","lastTransitionTime":"2025-12-11T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.537201 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.537265 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.537283 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.537308 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.537324 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:06Z","lastTransitionTime":"2025-12-11T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.630125 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.630209 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.630249 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:06 crc kubenswrapper[4746]: E1211 09:55:06.630395 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:06 crc kubenswrapper[4746]: E1211 09:55:06.630469 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:06 crc kubenswrapper[4746]: E1211 09:55:06.630535 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.640139 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.640172 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.640244 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.640265 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.640314 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:06Z","lastTransitionTime":"2025-12-11T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.742332 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.742392 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.742414 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.742439 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.742458 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:06Z","lastTransitionTime":"2025-12-11T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.845951 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.845987 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.845998 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.846040 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.846099 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:06Z","lastTransitionTime":"2025-12-11T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.948525 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.948584 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.948595 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.948611 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:06 crc kubenswrapper[4746]: I1211 09:55:06.948622 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:06Z","lastTransitionTime":"2025-12-11T09:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.050928 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.051000 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.051013 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.051030 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.051064 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:07Z","lastTransitionTime":"2025-12-11T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.153474 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.153524 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.153536 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.153553 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.153566 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:07Z","lastTransitionTime":"2025-12-11T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.257233 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.257273 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.257285 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.257323 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.257334 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:07Z","lastTransitionTime":"2025-12-11T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.359343 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.359390 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.359405 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.359424 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.359435 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:07Z","lastTransitionTime":"2025-12-11T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.461593 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.461636 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.461645 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.461664 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.461681 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:07Z","lastTransitionTime":"2025-12-11T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.563704 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.563740 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.563748 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.563763 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.563772 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:07Z","lastTransitionTime":"2025-12-11T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.629333 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:07 crc kubenswrapper[4746]: E1211 09:55:07.629472 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.648566 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.663849 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.665559 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.665610 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.665628 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.665647 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.665658 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:07Z","lastTransitionTime":"2025-12-11T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.681080 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.690616 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b33732d-642a-461e-929c-6c69e9b4a3d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319e929376197a4ad59aba9f64c20474d60de480e41f84cc01de5fbc7fa3bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e
18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4198aee534e1b263316ac333389124e1416f15b759253ea951e004336f2b86ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4198aee534e1b263316ac333389124e1416f15b759253ea951e004336f2b86ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc 
kubenswrapper[4746]: I1211 09:55:07.701986 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.722909 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"message\\\":\\\"utate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 09:54:41.930797 6374 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1211 09:54:41.932734 6374 ovnkube.go:599] Stopped ovnkube\\\\nI1211 09:54:41.932775 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 09:54:41.932842 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-w2s5z_openshift-ovn-kubernetes(014636cb-e768-4554-9556-460db2ebfdcb)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144
ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.735271 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e317f8ae14d2ec2bb140d6293e5de8f1b9f1403d8ec9e68da06711aa5c8e467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:59Z\\\",\\\"message\\\":\\\"2025-12-11T09:54:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ec997725-6a86-46b4-a40f-43a3a4321fe1\\\\n2025-12-11T09:54:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ec997725-6a86-46b4-a40f-43a3a4321fe1 to /host/opt/cni/bin/\\\\n2025-12-11T09:54:14Z [verbose] multus-daemon started\\\\n2025-12-11T09:54:14Z [verbose] 
Readiness Indicator file check\\\\n2025-12-11T09:54:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.744362 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab4
5a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.755296 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8832c4e289c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51756d637dfd34e4eacc4f016219576937bcb
063378f40943fbef7aea387db50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.768386 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.768426 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.768438 4746 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.768453 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.768465 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:07Z","lastTransitionTime":"2025-12-11T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.770424 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.782079 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f86bd9b-2d54-4712-ad26-0a49e1b146dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffb65884bd374a1f68af73bd917b4b030ca7c890d5dbdeace452d63bcfc8d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d497e87b108ba839d40a853ccdf1c277759bd4b987852da432b7f876d1ce1890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a1e1264ef536cc9003210b7d9fc47faa3d6b2b80bf328fb0fafddb91e1ad5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-1
1T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.795531 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.807732 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.820303 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.831259 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.846767 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e
482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.857511 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.867610 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:07Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:07 crc 
kubenswrapper[4746]: I1211 09:55:07.870880 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.870911 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.870923 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.870937 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.870947 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:07Z","lastTransitionTime":"2025-12-11T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.987157 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.987200 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.987211 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.987226 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:07 crc kubenswrapper[4746]: I1211 09:55:07.987240 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:07Z","lastTransitionTime":"2025-12-11T09:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.089832 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.089876 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.089887 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.089906 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.089917 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:08Z","lastTransitionTime":"2025-12-11T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.192411 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.192444 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.192456 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.192472 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.192484 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:08Z","lastTransitionTime":"2025-12-11T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.294214 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.294239 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.294247 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.294259 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.294267 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:08Z","lastTransitionTime":"2025-12-11T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.396768 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.396819 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.396832 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.396849 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.396864 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:08Z","lastTransitionTime":"2025-12-11T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.499370 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.499426 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.499442 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.499466 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.499482 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:08Z","lastTransitionTime":"2025-12-11T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.601310 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.601350 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.601363 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.601380 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.601392 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:08Z","lastTransitionTime":"2025-12-11T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.629764 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.629849 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:08 crc kubenswrapper[4746]: E1211 09:55:08.629893 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.629954 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:08 crc kubenswrapper[4746]: E1211 09:55:08.630127 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:08 crc kubenswrapper[4746]: E1211 09:55:08.630197 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.631016 4746 scope.go:117] "RemoveContainer" containerID="295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.705346 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.705396 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.705413 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.705435 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.705452 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:08Z","lastTransitionTime":"2025-12-11T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.807412 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.807469 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.807487 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.807508 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:08 crc kubenswrapper[4746]: I1211 09:55:08.807526 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:08Z","lastTransitionTime":"2025-12-11T09:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.002868 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.002921 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.002935 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.002955 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.002973 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:09Z","lastTransitionTime":"2025-12-11T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.106263 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.106291 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.106299 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.106311 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.106320 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:09Z","lastTransitionTime":"2025-12-11T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.209737 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.209778 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.209790 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.209804 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.209961 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:09Z","lastTransitionTime":"2025-12-11T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.312625 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.312672 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.312682 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.312698 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.312709 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:09Z","lastTransitionTime":"2025-12-11T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.415347 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.415393 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.415404 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.415422 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.415434 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:09Z","lastTransitionTime":"2025-12-11T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.517939 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.517988 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.517999 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.518014 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.518025 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:09Z","lastTransitionTime":"2025-12-11T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.620423 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.620465 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.620480 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.620498 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.620736 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:09Z","lastTransitionTime":"2025-12-11T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.629963 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:09 crc kubenswrapper[4746]: E1211 09:55:09.630094 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.723502 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.723540 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.723551 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.723566 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.723575 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:09Z","lastTransitionTime":"2025-12-11T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.826978 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.827024 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.827037 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.827073 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.827086 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:09Z","lastTransitionTime":"2025-12-11T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.933830 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.933889 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.933904 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.933922 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:09 crc kubenswrapper[4746]: I1211 09:55:09.933935 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:09Z","lastTransitionTime":"2025-12-11T09:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.036297 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.036337 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.036348 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.036364 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.036376 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:10Z","lastTransitionTime":"2025-12-11T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.042441 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovnkube-controller/2.log" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.044955 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerStarted","Data":"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab"} Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.045947 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.059190 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.072846 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.087340 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\
"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.100074 4746 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d74dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.112234 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b33732d-642a-461e-929c-6c69e9b4a3d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319e929376197a4ad59aba9f64c20474d60de480e41f84cc01de5fbc7fa3bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4198aee534e1b263316ac333389124e1416f15b759253ea951e004336f2b86ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4198aee534e1b263316ac333389124e1416f15b759253ea951e004336f2b86ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.125702 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.139204 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 
09:55:10.139241 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.139250 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.139266 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.139276 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:10Z","lastTransitionTime":"2025-12-11T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.141326 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.155268 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-rele
ase\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.167411 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69
520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.180247 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8832c4e289c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51756d637dfd34e4eacc4f016219576937bcb
063378f40943fbef7aea387db50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.194327 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd47
7174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.211492 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.303671 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.303705 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.303714 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.303726 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.303735 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:10Z","lastTransitionTime":"2025-12-11T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.308725 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"message\\\":\\\"utate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 09:54:41.930797 6374 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1211 09:54:41.932734 6374 ovnkube.go:599] Stopped ovnkube\\\\nI1211 09:54:41.932775 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 09:54:41.932842 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.330391 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e317f8ae14d2ec2bb140d6293e5de8f1b9f1403d8ec9e68da06711aa5c8e467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:59Z\\\",\\\"message\\\":\\\"2025-12-11T09:54:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ec997725-6a86-46b4-a40f-43a3a4321fe1\\\\n2025-12-11T09:54:14+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ec997725-6a86-46b4-a40f-43a3a4321fe1 to /host/opt/cni/bin/\\\\n2025-12-11T09:54:14Z [verbose] multus-daemon started\\\\n2025-12-11T09:54:14Z [verbose] 
Readiness Indicator file check\\\\n2025-12-11T09:54:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.344674 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.358311 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f86bd9b-2d54-4712-ad26-0a49e1b146dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffb65884bd374a1f68af73bd917b4b030ca7c890d5dbdeace452d63bcfc8d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d497e87b108ba839d40a853ccdf1c277759bd4b987852da432b7f876d1ce1890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a1e1264ef536cc9003210b7d9fc47faa3d6b2b80bf328fb0fafddb91e1ad5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.373407 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.385553 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:10Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.406575 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.406616 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.406627 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.406643 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.406697 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:10Z","lastTransitionTime":"2025-12-11T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.509565 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.509609 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.509618 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.509633 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.509646 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:10Z","lastTransitionTime":"2025-12-11T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.625875 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.625910 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.625918 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.625931 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.625941 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:10Z","lastTransitionTime":"2025-12-11T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.630165 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.630183 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:10 crc kubenswrapper[4746]: E1211 09:55:10.630276 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.630348 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:10 crc kubenswrapper[4746]: E1211 09:55:10.630573 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:10 crc kubenswrapper[4746]: E1211 09:55:10.630798 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.645947 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.727387 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.727420 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.727430 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.727443 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.727454 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:10Z","lastTransitionTime":"2025-12-11T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.829251 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.829283 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.829294 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.829309 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.829321 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:10Z","lastTransitionTime":"2025-12-11T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.932067 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.932111 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.932122 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.932137 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:10 crc kubenswrapper[4746]: I1211 09:55:10.932147 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:10Z","lastTransitionTime":"2025-12-11T09:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.034186 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.034219 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.034227 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.034239 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.034249 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:11Z","lastTransitionTime":"2025-12-11T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.136820 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.136885 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.136903 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.136927 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.136946 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:11Z","lastTransitionTime":"2025-12-11T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.240243 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.240480 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.240488 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.240501 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.240510 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:11Z","lastTransitionTime":"2025-12-11T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.342454 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.342510 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.342520 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.342533 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.342542 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:11Z","lastTransitionTime":"2025-12-11T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.444433 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.444503 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.444523 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.444549 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.444567 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:11Z","lastTransitionTime":"2025-12-11T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.547792 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.547844 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.547859 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.547885 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.547901 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:11Z","lastTransitionTime":"2025-12-11T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.629914 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:11 crc kubenswrapper[4746]: E1211 09:55:11.630102 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.650378 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.650439 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.650456 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.650478 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.650495 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:11Z","lastTransitionTime":"2025-12-11T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.753067 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.753102 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.753110 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.753123 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.753132 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:11Z","lastTransitionTime":"2025-12-11T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.854941 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.854989 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.855005 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.855028 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.855065 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:11Z","lastTransitionTime":"2025-12-11T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.957335 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.957375 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.957385 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.957403 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:11 crc kubenswrapper[4746]: I1211 09:55:11.957414 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:11Z","lastTransitionTime":"2025-12-11T09:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.060098 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.060169 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.060192 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.060224 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.060248 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:12Z","lastTransitionTime":"2025-12-11T09:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.061912 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovnkube-controller/3.log" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.062784 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovnkube-controller/2.log" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.066578 4746 generic.go:334] "Generic (PLEG): container finished" podID="014636cb-e768-4554-9556-460db2ebfdcb" containerID="3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab" exitCode=1 Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.066614 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerDied","Data":"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab"} Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.066652 4746 scope.go:117] "RemoveContainer" containerID="295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.067980 4746 scope.go:117] "RemoveContainer" containerID="3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab" Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.068330 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w2s5z_openshift-ovn-kubernetes(014636cb-e768-4554-9556-460db2ebfdcb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podUID="014636cb-e768-4554-9556-460db2ebfdcb" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.087937 4746 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90d0c4caf18159e482d43c217506738bfdbd87d7cf8de9744cc3fb1eb50a0454\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.105102 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70a89e1d-ff2b-4918-bae1-2f79d18396e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7d37ad935a6152865affdd56896f46b993e6f6a538c80a6875e9a255ec7314e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d7
4dcf428d6fb588a3b46a2cc5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzchj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-mxwk6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.122190 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2a55e871-062f-43fd-a1e2-b2296474f4f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8z2lr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:24Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xh6zv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc 
kubenswrapper[4746]: I1211 09:55:12.149190 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"76a0ab76-58cf-42a8-80b6-070ab85c0859\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://63ebf675f4303f52f062fc79f3751366b3853122b49af45814f19399bcdb798f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://f7f5537cd0053c95a63c6cacc4cf0fb7137b16eb68fa1e330a6eafb601a73ad8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcce5c2f2a96abf7c2f9f937e3a7fffbf46ad2b8a0aa4d1398f9aaaf24cec19a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0204a90722254d27793131bdc8f28fc1843c69ea7e36d667fbfa8a634ebbd082\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f33fd3e77fc5e56bba4d7007672d1e95456a26bc2715154b74ba89d17daff102\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56c7c2aa376c17582924cc7177f101ff4540b61913daeefbb126e6d3fe884dfd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56c7c2aa376c17582924cc7177f101ff4540b61913daeefbb126e6d3fe884dfd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f3842a712af1579cc18307fa2a8a688181aa6bb7ca33d07ddbb19b3f6caeda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4f3842a712af1579cc18307fa2a8a688181aa6bb7ca33d07ddbb19b3f6caeda\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://4c70f6bd91f285434c842b6bae9e5f75f4cb5dd7dff272c42e89286a4a7f0ebd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4c70f6bd91f285434c842b6bae9e5f75f4cb5dd7dff272c42e89286a4a7f0ebd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.162532 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.162573 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.162588 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.162609 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.162624 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:12Z","lastTransitionTime":"2025-12-11T09:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.164755 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31e93b7cfd4209333cdc7f7a1aa2147e00ee349f58b5ce17458a09a9d0c193c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.181285 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db43e12e9f505fedf6b24c2f218037fc7fe90d9e408e2635bb09d9b937103df5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.203570 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6c9cd07c-9f4b-41bb-b29b-db9411c64336\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8207544d707de3cbd07165055ea80b231a7a88260c61e59c6e93604e6c8b08ec\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e9d64aea98c2857c362c30ccdc2cbbb64150d359219a2549159f71c204d50a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://de7c9ec3f2e01dfbc3f12676fd6e32f80c9a9471c8073097d25741b6f502061b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8031b77a047fc08f6abfdc365eb2cf7a977e256744e8967eeaf32b0f0305ce86\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46ad888d8344ac4bb175982e6978ddbb210ecbe1788713524adcbb6910aa01fa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7e8654cf57353100b75d84f7cf070c0f63ca35eddb41400ff991cfd79eb0a736\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\
\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e117ac2af81d84b161b03654fa871b88005191ad1a1ab2479f997e4dfa1cf4bb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-phjqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vtfvl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.216567 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b33732d-642a-461e-929c-6c69e9b4a3d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://319e929376197a4ad59aba9f64c20474d60de480e41f84cc01de5fbc7fa3bd8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4198aee534e1b263316ac333389124e1416f15b759253ea951e004336f2b86ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4198aee534e1b263316ac333389124e1416f15b759253ea951e004336f2b86ae\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.229366 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.247096 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"014636cb-e768-4554-9556-460db2ebfdcb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://295c9c25840aa72ebd6b237a9b81fa0e66af5202010cc11b6342e8f5e47789b6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"message\\\":\\\"utate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:} {Op:update Table:NAT Row:map[external_ip:192.168.126.11 logical_ip:10.217.0.59 options:{GoMap:map[stateless:false]} type:snat] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {dce28c51-c9f1-478b-97c8-7e209d6e7cbe}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:} {Op:mutate Table:Logical_Router Row:map[] Rows:[] Columns:[] Mutations:[{Column:nat Mutator:insert Value:{GoSet:[{GoUUID:dce28c51-c9f1-478b-97c8-7e209d6e7cbe}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1211 09:54:41.930797 6374 admin_network_policy_controller.go:133] Setting up event handlers for Admin Network Policy\\\\nI1211 09:54:41.932734 6374 ovnkube.go:599] Stopped ovnkube\\\\nI1211 09:54:41.932775 6374 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1211 09:54:41.932842 6374 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:55:11Z\\\",\\\"message\\\":\\\"ialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:11Z is after 2025-08-24T17:21:41Z]\\\\nI1211 09:55:11.056285 6801 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1211 09:55:11.058340 6801 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1211 09:55:11.058356 6801 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/iptables-alerter-4ln5h in node crc\\\\nI1211 09:55:11.058367 6801 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-operator/iptables-alerter-4ln5h after 0 failed attempt(s)\\\\nI1211 09:55:11.058377 6801 default_network_controller.go:776] Recording success event on pod openshift-network-operator/iptables-alerter-4ln5h\\\\nI1211 09:55:11.056303 6801 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\
":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-49h5j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-w2s5z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 
2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.261530 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-r622c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"52ba00d9-b0ef-4496-a6b8-e170f405c592\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:55:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e317f8ae14d2ec2bb140d6293e5de8f1b9f1403d8ec9e68da06711aa5c8e467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-11T09:54:59Z\\\",\\\"message\\\":\\\"2025-12-11T09:54:14+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ec997725-6a86-46b4-a40f-43a3a4321fe1\\\\n2025-12-11T09:54:14+00:00 [cnibincopy] 
Successfully moved files in /host/opt/cni/bin/upgrade_ec997725-6a86-46b4-a40f-43a3a4321fe1 to /host/opt/cni/bin/\\\\n2025-12-11T09:54:14Z [verbose] multus-daemon started\\\\n2025-12-11T09:54:14Z [verbose] Readiness Indicator file check\\\\n2025-12-11T09:54:59Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:55:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-spfrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-multus\"/\"multus-r622c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.264982 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.265011 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.265019 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.265034 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.265058 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:12Z","lastTransitionTime":"2025-12-11T09:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.273205 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kbcxr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5e883e78-75ce-40ce-90f5-41d1e355ef02\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a58f4ac06a9fab45a900234fc233a38c6e5d98716fbef5f19d7f91c8e53051\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":
\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vsdjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:13Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kbcxr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.286543 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"770e7df6-7594-4c7a-8a3a-a7948e532da7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8832c4e289c47bf5ec6be0285e9ef56514d637fc2901d67959eda4e7c6861126\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://51756d637dfd34e4eacc4f016219576937bcb
063378f40943fbef7aea387db50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zltsx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:23Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-tgzfv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.303628 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0f95af3-00dc-44a8-98d1-a870f2276f19\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-11T09:54:08Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1211 09:54:01.537656 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1211 09:54:01.538354 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416189075/tls.crt::/tmp/serving-cert-2416189075/tls.key\\\\\\\"\\\\nI1211 09:54:07.892353 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1211 09:54:07.896727 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1211 09:54:07.896746 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1211 09:54:07.896781 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1211 09:54:07.896785 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1211 09:54:07.900633 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1211 09:54:07.900665 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900670 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1211 09:54:07.900674 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1211 09:54:07.900677 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1211 09:54:07.900680 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1211 09:54:07.900683 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1211 09:54:07.900764 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all 
endpoints registered and discovery information is complete\\\\nF1211 09:54:07.901972 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:51Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://767b264bee6944c3f289dfafc6761fd47
7174f5a72a3aeb4b6ae14de06962f2a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.317600 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-6jfmh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"58f9b204-f3da-4add-be36-d1be33351d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a72074c93cdbb7d1ec66261c75bceb6d248d122e97790a728604e613fbdd0db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:54:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s8cr2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:54:10Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-6jfmh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.331933 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.346832 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.358084 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"29b2b550-11c2-4d8f-bb4b-f5c129957d46\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9c07e95ab999191338ce104a774b5a61a6b1b9f3197493a47426ca018f6348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d26fe44daa0b87848797e13b595122b4f6c045ecaa01e21225bda533b0fb83f5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7cea5c69539ebab8ab501821bc8493fd2f601f3979593e09ccd9d14bd4ef81ad\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.367361 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.367387 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.367397 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.367409 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.367417 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:12Z","lastTransitionTime":"2025-12-11T09:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.370935 4746 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2f86bd9b-2d54-4712-ad26-0a49e1b146dd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-11T09:53:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ffb65884bd374a1f68af73bd917b4b030ca7c890d5dbdeace452d63bcfc8d99d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://d497e87b108ba839d40a853ccdf1c277759bd4b987852da432b7f876d1ce1890\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://97a1e1264ef536cc9003210b7d9fc47faa3d6b2b80bf328fb0fafddb91e1ad5f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-11T09:53:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd09b32db5b29abc9410af0f3a1bcabc997d5bec621c0bc9c0e7997fc8bf081f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-11T09:53:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-11T09:53:48Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-11T09:53:47Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-11T09:55:12Z is after 2025-08-24T17:21:41Z" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.445762 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.445909 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:16.445891601 +0000 UTC m=+149.305754914 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.469997 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.470108 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.470133 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.470160 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.470181 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:12Z","lastTransitionTime":"2025-12-11T09:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.546636 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.546693 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.546726 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.546747 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.546828 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" 
not registered Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.546858 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.546872 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.546876 4746 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.546888 4746 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.546890 4746 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.546899 4746 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.546939 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 
nodeName:}" failed. No retries permitted until 2025-12-11 09:56:16.546923946 +0000 UTC m=+149.406787259 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.546954 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-11 09:56:16.546948426 +0000 UTC m=+149.406811739 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.546900 4746 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.547000 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 09:56:16.546973547 +0000 UTC m=+149.406836940 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.547025 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-11 09:56:16.547013018 +0000 UTC m=+149.406876441 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.572199 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.572273 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.572292 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.572314 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.572329 4746 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:12Z","lastTransitionTime":"2025-12-11T09:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.630018 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.630128 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.630127 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.630230 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.630383 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:12 crc kubenswrapper[4746]: E1211 09:55:12.630512 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.674709 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.674775 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.674798 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.674826 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.674846 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:12Z","lastTransitionTime":"2025-12-11T09:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.777025 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.777113 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.777128 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.777145 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.777177 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:12Z","lastTransitionTime":"2025-12-11T09:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.880383 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.880414 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.880423 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.880435 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.880445 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:12Z","lastTransitionTime":"2025-12-11T09:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.983479 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.983525 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.983541 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.983563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:12 crc kubenswrapper[4746]: I1211 09:55:12.983579 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:12Z","lastTransitionTime":"2025-12-11T09:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.072474 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovnkube-controller/3.log" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.086323 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.086367 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.086379 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.086398 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.086411 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:13Z","lastTransitionTime":"2025-12-11T09:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.189322 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.189434 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.189458 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.189486 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.189508 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:13Z","lastTransitionTime":"2025-12-11T09:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.293129 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.293204 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.293232 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.293263 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.293289 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:13Z","lastTransitionTime":"2025-12-11T09:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.395663 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.395704 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.395716 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.395733 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.395745 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:13Z","lastTransitionTime":"2025-12-11T09:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.498506 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.498563 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.498576 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.498593 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.498606 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:13Z","lastTransitionTime":"2025-12-11T09:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.601504 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.601577 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.601599 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.601630 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.601649 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:13Z","lastTransitionTime":"2025-12-11T09:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.629834 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:13 crc kubenswrapper[4746]: E1211 09:55:13.630080 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.705243 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.705349 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.705411 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.705437 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.705495 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:13Z","lastTransitionTime":"2025-12-11T09:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.808459 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.808513 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.808527 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.808546 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.808557 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:13Z","lastTransitionTime":"2025-12-11T09:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.910685 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.910761 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.910785 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.910808 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:13 crc kubenswrapper[4746]: I1211 09:55:13.910826 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:13Z","lastTransitionTime":"2025-12-11T09:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.012950 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.013081 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.013095 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.013114 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.013160 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:14Z","lastTransitionTime":"2025-12-11T09:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.115773 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.115822 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.115834 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.115851 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.115864 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:14Z","lastTransitionTime":"2025-12-11T09:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.218682 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.218715 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.218724 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.218737 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.218749 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:14Z","lastTransitionTime":"2025-12-11T09:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.322450 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.322518 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.322542 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.322571 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.322631 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:14Z","lastTransitionTime":"2025-12-11T09:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.424673 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.424713 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.424722 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.424733 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.424743 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:14Z","lastTransitionTime":"2025-12-11T09:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.527753 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.527794 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.527812 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.527829 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.527840 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:14Z","lastTransitionTime":"2025-12-11T09:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.629919 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:14 crc kubenswrapper[4746]: E1211 09:55:14.630116 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.630106 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.630269 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.630387 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.630412 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.630421 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.630432 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.630440 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:14Z","lastTransitionTime":"2025-12-11T09:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:14 crc kubenswrapper[4746]: E1211 09:55:14.630368 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:14 crc kubenswrapper[4746]: E1211 09:55:14.630686 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.732316 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.732352 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.732361 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.732374 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.732385 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:14Z","lastTransitionTime":"2025-12-11T09:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.834920 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.834973 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.834998 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.835018 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.835032 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:14Z","lastTransitionTime":"2025-12-11T09:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.937626 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.937665 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.937675 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.937689 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:14 crc kubenswrapper[4746]: I1211 09:55:14.937699 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:14Z","lastTransitionTime":"2025-12-11T09:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.040304 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.040383 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.040401 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.040429 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.040451 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:15Z","lastTransitionTime":"2025-12-11T09:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.142985 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.143018 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.143026 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.143041 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.143073 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:15Z","lastTransitionTime":"2025-12-11T09:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.245331 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.245379 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.245391 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.245406 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.245415 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:15Z","lastTransitionTime":"2025-12-11T09:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.348601 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.348658 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.348669 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.348684 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.348693 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:15Z","lastTransitionTime":"2025-12-11T09:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.451496 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.451555 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.451573 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.451598 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.451621 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:15Z","lastTransitionTime":"2025-12-11T09:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.555109 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.555200 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.555220 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.555248 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.555266 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:15Z","lastTransitionTime":"2025-12-11T09:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.629599 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:15 crc kubenswrapper[4746]: E1211 09:55:15.629999 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.658127 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.658178 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.658189 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.658237 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.658250 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:15Z","lastTransitionTime":"2025-12-11T09:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.760897 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.760934 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.760945 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.760960 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.760969 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:15Z","lastTransitionTime":"2025-12-11T09:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.863746 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.863790 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.863800 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.863813 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.863822 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:15Z","lastTransitionTime":"2025-12-11T09:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.966670 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.966743 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.966760 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.966783 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:15 crc kubenswrapper[4746]: I1211 09:55:15.966800 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:15Z","lastTransitionTime":"2025-12-11T09:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.070472 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.070539 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.070551 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.070569 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.070581 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:16Z","lastTransitionTime":"2025-12-11T09:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.174250 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.174321 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.174335 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.174351 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.174363 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:16Z","lastTransitionTime":"2025-12-11T09:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.187440 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.187548 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.187567 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.187594 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.187612 4746 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-11T09:55:16Z","lastTransitionTime":"2025-12-11T09:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.276788 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q"] Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.277418 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.281825 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.281974 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.283959 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.285521 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.292145 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fceb43a-bd83-4d81-8ccb-1113fe02fa6b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xl86q\" (UID: \"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.292198 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0fceb43a-bd83-4d81-8ccb-1113fe02fa6b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xl86q\" (UID: \"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.292240 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0fceb43a-bd83-4d81-8ccb-1113fe02fa6b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xl86q\" (UID: \"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.292257 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fceb43a-bd83-4d81-8ccb-1113fe02fa6b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xl86q\" (UID: \"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.292276 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0fceb43a-bd83-4d81-8ccb-1113fe02fa6b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xl86q\" (UID: \"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.313927 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=6.313909714 podStartE2EDuration="6.313909714s" podCreationTimestamp="2025-12-11 09:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:55:16.311345446 +0000 UTC m=+89.171208759" watchObservedRunningTime="2025-12-11 09:55:16.313909714 +0000 UTC m=+89.173773027" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.372709 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podStartSLOduration=66.37268901 podStartE2EDuration="1m6.37268901s" podCreationTimestamp="2025-12-11 
09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:55:16.350459432 +0000 UTC m=+89.210322755" watchObservedRunningTime="2025-12-11 09:55:16.37268901 +0000 UTC m=+89.232552353" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.393814 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0fceb43a-bd83-4d81-8ccb-1113fe02fa6b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xl86q\" (UID: \"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.394156 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fceb43a-bd83-4d81-8ccb-1113fe02fa6b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xl86q\" (UID: \"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.394664 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0fceb43a-bd83-4d81-8ccb-1113fe02fa6b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xl86q\" (UID: \"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.394893 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0fceb43a-bd83-4d81-8ccb-1113fe02fa6b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xl86q\" (UID: \"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.396161 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fceb43a-bd83-4d81-8ccb-1113fe02fa6b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xl86q\" (UID: \"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.393976 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/0fceb43a-bd83-4d81-8ccb-1113fe02fa6b-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-xl86q\" (UID: \"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.396029 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0fceb43a-bd83-4d81-8ccb-1113fe02fa6b-service-ca\") pod \"cluster-version-operator-5c965bbfc6-xl86q\" (UID: \"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.394811 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/0fceb43a-bd83-4d81-8ccb-1113fe02fa6b-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-xl86q\" (UID: \"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.399234 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
podStartSLOduration=16.399220413 podStartE2EDuration="16.399220413s" podCreationTimestamp="2025-12-11 09:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:55:16.398791452 +0000 UTC m=+89.258654765" watchObservedRunningTime="2025-12-11 09:55:16.399220413 +0000 UTC m=+89.259083726" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.409650 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fceb43a-bd83-4d81-8ccb-1113fe02fa6b-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-xl86q\" (UID: \"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.422892 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0fceb43a-bd83-4d81-8ccb-1113fe02fa6b-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-xl86q\" (UID: \"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.453996 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-vtfvl" podStartSLOduration=66.453968952 podStartE2EDuration="1m6.453968952s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:55:16.453817068 +0000 UTC m=+89.313680421" watchObservedRunningTime="2025-12-11 09:55:16.453968952 +0000 UTC m=+89.313832265" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.473118 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podStartSLOduration=68.473091079 podStartE2EDuration="1m8.473091079s" podCreationTimestamp="2025-12-11 09:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:55:16.470702335 +0000 UTC m=+89.330565648" watchObservedRunningTime="2025-12-11 09:55:16.473091079 +0000 UTC m=+89.332954392" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.483150 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-6jfmh" podStartSLOduration=67.483124874 podStartE2EDuration="1m7.483124874s" podCreationTimestamp="2025-12-11 09:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:55:16.481864471 +0000 UTC m=+89.341727784" watchObservedRunningTime="2025-12-11 09:55:16.483124874 +0000 UTC m=+89.342988197" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.520007 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-r622c" podStartSLOduration=66.51998437 podStartE2EDuration="1m6.51998437s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:55:16.519168769 +0000 UTC m=+89.379032092" watchObservedRunningTime="2025-12-11 09:55:16.51998437 +0000 UTC m=+89.379847683" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.528675 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kbcxr" podStartSLOduration=67.52865368 podStartE2EDuration="1m7.52865368s" podCreationTimestamp="2025-12-11 09:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:55:16.52863846 +0000 UTC m=+89.388501773" 
watchObservedRunningTime="2025-12-11 09:55:16.52865368 +0000 UTC m=+89.388516993" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.541614 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-tgzfv" podStartSLOduration=66.541591472 podStartE2EDuration="1m6.541591472s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:55:16.540975655 +0000 UTC m=+89.400838968" watchObservedRunningTime="2025-12-11 09:55:16.541591472 +0000 UTC m=+89.401454785" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.553892 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.553867657 podStartE2EDuration="1m8.553867657s" podCreationTimestamp="2025-12-11 09:54:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:55:16.553314742 +0000 UTC m=+89.413178055" watchObservedRunningTime="2025-12-11 09:55:16.553867657 +0000 UTC m=+89.413730960" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.567989 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=35.567971941 podStartE2EDuration="35.567971941s" podCreationTimestamp="2025-12-11 09:54:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:55:16.566492942 +0000 UTC m=+89.426356255" watchObservedRunningTime="2025-12-11 09:55:16.567971941 +0000 UTC m=+89.427835244" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.591695 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.629428 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.629576 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:16 crc kubenswrapper[4746]: E1211 09:55:16.629751 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:16 crc kubenswrapper[4746]: I1211 09:55:16.629798 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:16 crc kubenswrapper[4746]: E1211 09:55:16.629879 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:16 crc kubenswrapper[4746]: E1211 09:55:16.629996 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:17 crc kubenswrapper[4746]: I1211 09:55:17.088948 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" event={"ID":"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b","Type":"ContainerStarted","Data":"790c4518ca41a56a0a7cdd5ee4884113bfb6bd733ee119928a5fc1c50fc41dd3"} Dec 11 09:55:17 crc kubenswrapper[4746]: I1211 09:55:17.088991 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" event={"ID":"0fceb43a-bd83-4d81-8ccb-1113fe02fa6b","Type":"ContainerStarted","Data":"7fe9fa0b1d80b63f947d7f91635b29082c3ee1d0760a7a96271b3eff2cc95ac1"} Dec 11 09:55:17 crc kubenswrapper[4746]: I1211 09:55:17.105079 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-xl86q" podStartSLOduration=67.105059229 podStartE2EDuration="1m7.105059229s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:55:17.102861992 +0000 UTC m=+89.962725305" watchObservedRunningTime="2025-12-11 09:55:17.105059229 +0000 UTC m=+89.964922542" Dec 11 09:55:17 crc kubenswrapper[4746]: I1211 09:55:17.629742 4746 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:17 crc kubenswrapper[4746]: E1211 09:55:17.631039 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:18 crc kubenswrapper[4746]: I1211 09:55:18.629824 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:18 crc kubenswrapper[4746]: I1211 09:55:18.629987 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:18 crc kubenswrapper[4746]: I1211 09:55:18.631137 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:18 crc kubenswrapper[4746]: E1211 09:55:18.631243 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:18 crc kubenswrapper[4746]: E1211 09:55:18.631311 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:18 crc kubenswrapper[4746]: E1211 09:55:18.631520 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:19 crc kubenswrapper[4746]: I1211 09:55:19.631406 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:19 crc kubenswrapper[4746]: E1211 09:55:19.632315 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:20 crc kubenswrapper[4746]: I1211 09:55:20.630126 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:20 crc kubenswrapper[4746]: I1211 09:55:20.630164 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:20 crc kubenswrapper[4746]: I1211 09:55:20.630275 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:20 crc kubenswrapper[4746]: E1211 09:55:20.630392 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:20 crc kubenswrapper[4746]: E1211 09:55:20.630558 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:20 crc kubenswrapper[4746]: E1211 09:55:20.630730 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:21 crc kubenswrapper[4746]: I1211 09:55:21.629646 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:21 crc kubenswrapper[4746]: E1211 09:55:21.630147 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:22 crc kubenswrapper[4746]: I1211 09:55:22.629951 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:22 crc kubenswrapper[4746]: I1211 09:55:22.630095 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:22 crc kubenswrapper[4746]: E1211 09:55:22.630147 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:22 crc kubenswrapper[4746]: E1211 09:55:22.630252 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:22 crc kubenswrapper[4746]: I1211 09:55:22.630316 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:22 crc kubenswrapper[4746]: E1211 09:55:22.630395 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:22 crc kubenswrapper[4746]: I1211 09:55:22.631845 4746 scope.go:117] "RemoveContainer" containerID="3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab" Dec 11 09:55:22 crc kubenswrapper[4746]: E1211 09:55:22.632169 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w2s5z_openshift-ovn-kubernetes(014636cb-e768-4554-9556-460db2ebfdcb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podUID="014636cb-e768-4554-9556-460db2ebfdcb" Dec 11 09:55:23 crc kubenswrapper[4746]: I1211 09:55:23.630456 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:23 crc kubenswrapper[4746]: E1211 09:55:23.630627 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:24 crc kubenswrapper[4746]: I1211 09:55:24.629745 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:24 crc kubenswrapper[4746]: I1211 09:55:24.629819 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:24 crc kubenswrapper[4746]: I1211 09:55:24.629819 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:24 crc kubenswrapper[4746]: E1211 09:55:24.629951 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:24 crc kubenswrapper[4746]: E1211 09:55:24.630196 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:24 crc kubenswrapper[4746]: E1211 09:55:24.630314 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:25 crc kubenswrapper[4746]: I1211 09:55:25.629741 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:25 crc kubenswrapper[4746]: E1211 09:55:25.629897 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:26 crc kubenswrapper[4746]: I1211 09:55:26.629381 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:26 crc kubenswrapper[4746]: I1211 09:55:26.629605 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:26 crc kubenswrapper[4746]: E1211 09:55:26.629703 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:26 crc kubenswrapper[4746]: E1211 09:55:26.629924 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:26 crc kubenswrapper[4746]: I1211 09:55:26.630671 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:26 crc kubenswrapper[4746]: E1211 09:55:26.630986 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:27 crc kubenswrapper[4746]: I1211 09:55:27.630209 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:27 crc kubenswrapper[4746]: E1211 09:55:27.635865 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:28 crc kubenswrapper[4746]: I1211 09:55:28.463017 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs\") pod \"network-metrics-daemon-xh6zv\" (UID: \"2a55e871-062f-43fd-a1e2-b2296474f4f3\") " pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:28 crc kubenswrapper[4746]: E1211 09:55:28.463158 4746 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 09:55:28 crc kubenswrapper[4746]: E1211 09:55:28.463212 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs podName:2a55e871-062f-43fd-a1e2-b2296474f4f3 nodeName:}" failed. No retries permitted until 2025-12-11 09:56:32.463195298 +0000 UTC m=+165.323058621 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs") pod "network-metrics-daemon-xh6zv" (UID: "2a55e871-062f-43fd-a1e2-b2296474f4f3") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 11 09:55:28 crc kubenswrapper[4746]: I1211 09:55:28.629860 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:28 crc kubenswrapper[4746]: I1211 09:55:28.630037 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:28 crc kubenswrapper[4746]: I1211 09:55:28.630097 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:28 crc kubenswrapper[4746]: E1211 09:55:28.630584 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:28 crc kubenswrapper[4746]: E1211 09:55:28.630727 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:28 crc kubenswrapper[4746]: E1211 09:55:28.630908 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:29 crc kubenswrapper[4746]: I1211 09:55:29.630096 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:29 crc kubenswrapper[4746]: E1211 09:55:29.630297 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:30 crc kubenswrapper[4746]: I1211 09:55:30.629661 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:30 crc kubenswrapper[4746]: I1211 09:55:30.629742 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:30 crc kubenswrapper[4746]: I1211 09:55:30.629679 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:30 crc kubenswrapper[4746]: E1211 09:55:30.629981 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:30 crc kubenswrapper[4746]: E1211 09:55:30.629812 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:30 crc kubenswrapper[4746]: E1211 09:55:30.630221 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:31 crc kubenswrapper[4746]: I1211 09:55:31.630370 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:31 crc kubenswrapper[4746]: E1211 09:55:31.630657 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:32 crc kubenswrapper[4746]: I1211 09:55:32.629640 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:32 crc kubenswrapper[4746]: I1211 09:55:32.629957 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:32 crc kubenswrapper[4746]: I1211 09:55:32.629968 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:32 crc kubenswrapper[4746]: E1211 09:55:32.630118 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:32 crc kubenswrapper[4746]: E1211 09:55:32.630234 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:32 crc kubenswrapper[4746]: E1211 09:55:32.630336 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:33 crc kubenswrapper[4746]: I1211 09:55:33.630185 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:33 crc kubenswrapper[4746]: E1211 09:55:33.630319 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:34 crc kubenswrapper[4746]: I1211 09:55:34.629716 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:34 crc kubenswrapper[4746]: I1211 09:55:34.629791 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:34 crc kubenswrapper[4746]: I1211 09:55:34.629752 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:34 crc kubenswrapper[4746]: E1211 09:55:34.629963 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:34 crc kubenswrapper[4746]: E1211 09:55:34.630108 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:34 crc kubenswrapper[4746]: E1211 09:55:34.630236 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:35 crc kubenswrapper[4746]: I1211 09:55:35.629462 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:35 crc kubenswrapper[4746]: E1211 09:55:35.629658 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:36 crc kubenswrapper[4746]: I1211 09:55:36.629622 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:36 crc kubenswrapper[4746]: I1211 09:55:36.629703 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:36 crc kubenswrapper[4746]: I1211 09:55:36.629636 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:36 crc kubenswrapper[4746]: E1211 09:55:36.629778 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:36 crc kubenswrapper[4746]: E1211 09:55:36.629893 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:36 crc kubenswrapper[4746]: E1211 09:55:36.630335 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:36 crc kubenswrapper[4746]: I1211 09:55:36.630592 4746 scope.go:117] "RemoveContainer" containerID="3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab" Dec 11 09:55:36 crc kubenswrapper[4746]: E1211 09:55:36.630719 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w2s5z_openshift-ovn-kubernetes(014636cb-e768-4554-9556-460db2ebfdcb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podUID="014636cb-e768-4554-9556-460db2ebfdcb" Dec 11 09:55:37 crc kubenswrapper[4746]: I1211 09:55:37.630020 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:37 crc kubenswrapper[4746]: E1211 09:55:37.631034 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:38 crc kubenswrapper[4746]: I1211 09:55:38.630198 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:38 crc kubenswrapper[4746]: I1211 09:55:38.630244 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:38 crc kubenswrapper[4746]: I1211 09:55:38.630304 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:38 crc kubenswrapper[4746]: E1211 09:55:38.630451 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:38 crc kubenswrapper[4746]: E1211 09:55:38.630513 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:38 crc kubenswrapper[4746]: E1211 09:55:38.630616 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:39 crc kubenswrapper[4746]: I1211 09:55:39.629558 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:39 crc kubenswrapper[4746]: E1211 09:55:39.629800 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:40 crc kubenswrapper[4746]: I1211 09:55:40.630030 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:40 crc kubenswrapper[4746]: I1211 09:55:40.630147 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:40 crc kubenswrapper[4746]: I1211 09:55:40.630147 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:40 crc kubenswrapper[4746]: E1211 09:55:40.630266 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:40 crc kubenswrapper[4746]: E1211 09:55:40.630436 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:40 crc kubenswrapper[4746]: E1211 09:55:40.630693 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:41 crc kubenswrapper[4746]: I1211 09:55:41.630375 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:41 crc kubenswrapper[4746]: E1211 09:55:41.630564 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:42 crc kubenswrapper[4746]: I1211 09:55:42.630285 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:42 crc kubenswrapper[4746]: I1211 09:55:42.630342 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:42 crc kubenswrapper[4746]: E1211 09:55:42.630481 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:42 crc kubenswrapper[4746]: I1211 09:55:42.630563 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:42 crc kubenswrapper[4746]: E1211 09:55:42.630772 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:42 crc kubenswrapper[4746]: E1211 09:55:42.630918 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:43 crc kubenswrapper[4746]: I1211 09:55:43.629633 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:43 crc kubenswrapper[4746]: E1211 09:55:43.629762 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:44 crc kubenswrapper[4746]: I1211 09:55:44.629683 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:44 crc kubenswrapper[4746]: I1211 09:55:44.629741 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:44 crc kubenswrapper[4746]: I1211 09:55:44.629761 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:44 crc kubenswrapper[4746]: E1211 09:55:44.629830 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:44 crc kubenswrapper[4746]: E1211 09:55:44.629891 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:44 crc kubenswrapper[4746]: E1211 09:55:44.629989 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:45 crc kubenswrapper[4746]: I1211 09:55:45.630414 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:45 crc kubenswrapper[4746]: E1211 09:55:45.630581 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:46 crc kubenswrapper[4746]: I1211 09:55:46.189704 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r622c_52ba00d9-b0ef-4496-a6b8-e170f405c592/kube-multus/1.log" Dec 11 09:55:46 crc kubenswrapper[4746]: I1211 09:55:46.190195 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r622c_52ba00d9-b0ef-4496-a6b8-e170f405c592/kube-multus/0.log" Dec 11 09:55:46 crc kubenswrapper[4746]: I1211 09:55:46.190238 4746 generic.go:334] "Generic (PLEG): container finished" podID="52ba00d9-b0ef-4496-a6b8-e170f405c592" containerID="2e317f8ae14d2ec2bb140d6293e5de8f1b9f1403d8ec9e68da06711aa5c8e467" exitCode=1 Dec 11 09:55:46 crc kubenswrapper[4746]: I1211 09:55:46.190267 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r622c" event={"ID":"52ba00d9-b0ef-4496-a6b8-e170f405c592","Type":"ContainerDied","Data":"2e317f8ae14d2ec2bb140d6293e5de8f1b9f1403d8ec9e68da06711aa5c8e467"} Dec 11 09:55:46 crc kubenswrapper[4746]: I1211 09:55:46.190298 4746 scope.go:117] "RemoveContainer" containerID="0a31ee8e4bd8e6c9af9f412a675baabf431c825af576e48d451489897f81dade" Dec 11 09:55:46 crc kubenswrapper[4746]: I1211 09:55:46.190663 4746 scope.go:117] "RemoveContainer" containerID="2e317f8ae14d2ec2bb140d6293e5de8f1b9f1403d8ec9e68da06711aa5c8e467" Dec 11 09:55:46 crc kubenswrapper[4746]: E1211 09:55:46.190828 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-r622c_openshift-multus(52ba00d9-b0ef-4496-a6b8-e170f405c592)\"" pod="openshift-multus/multus-r622c" podUID="52ba00d9-b0ef-4496-a6b8-e170f405c592" Dec 11 09:55:46 crc kubenswrapper[4746]: I1211 09:55:46.630181 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:46 crc kubenswrapper[4746]: I1211 09:55:46.630185 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:46 crc kubenswrapper[4746]: E1211 09:55:46.630704 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:46 crc kubenswrapper[4746]: E1211 09:55:46.630854 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:46 crc kubenswrapper[4746]: I1211 09:55:46.630231 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:46 crc kubenswrapper[4746]: E1211 09:55:46.630999 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:47 crc kubenswrapper[4746]: I1211 09:55:47.195682 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r622c_52ba00d9-b0ef-4496-a6b8-e170f405c592/kube-multus/1.log" Dec 11 09:55:47 crc kubenswrapper[4746]: E1211 09:55:47.602028 4746 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 11 09:55:47 crc kubenswrapper[4746]: I1211 09:55:47.629751 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:47 crc kubenswrapper[4746]: E1211 09:55:47.631744 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:47 crc kubenswrapper[4746]: I1211 09:55:47.632957 4746 scope.go:117] "RemoveContainer" containerID="3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab" Dec 11 09:55:47 crc kubenswrapper[4746]: E1211 09:55:47.633274 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-w2s5z_openshift-ovn-kubernetes(014636cb-e768-4554-9556-460db2ebfdcb)\"" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podUID="014636cb-e768-4554-9556-460db2ebfdcb" Dec 11 09:55:47 crc kubenswrapper[4746]: E1211 09:55:47.766147 4746 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 09:55:48 crc kubenswrapper[4746]: I1211 09:55:48.629796 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:48 crc kubenswrapper[4746]: I1211 09:55:48.629878 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:48 crc kubenswrapper[4746]: I1211 09:55:48.629937 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:48 crc kubenswrapper[4746]: E1211 09:55:48.629977 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:48 crc kubenswrapper[4746]: E1211 09:55:48.630112 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:48 crc kubenswrapper[4746]: E1211 09:55:48.630270 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:49 crc kubenswrapper[4746]: I1211 09:55:49.629553 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:49 crc kubenswrapper[4746]: E1211 09:55:49.629692 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:50 crc kubenswrapper[4746]: I1211 09:55:50.630192 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:50 crc kubenswrapper[4746]: I1211 09:55:50.630264 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:50 crc kubenswrapper[4746]: I1211 09:55:50.630289 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:50 crc kubenswrapper[4746]: E1211 09:55:50.631645 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:50 crc kubenswrapper[4746]: E1211 09:55:50.631724 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:50 crc kubenswrapper[4746]: E1211 09:55:50.631808 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:51 crc kubenswrapper[4746]: I1211 09:55:51.629458 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:51 crc kubenswrapper[4746]: E1211 09:55:51.629690 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:52 crc kubenswrapper[4746]: I1211 09:55:52.630220 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:52 crc kubenswrapper[4746]: I1211 09:55:52.630368 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:52 crc kubenswrapper[4746]: E1211 09:55:52.630495 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:52 crc kubenswrapper[4746]: I1211 09:55:52.630238 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:52 crc kubenswrapper[4746]: E1211 09:55:52.630593 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:52 crc kubenswrapper[4746]: E1211 09:55:52.630798 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:52 crc kubenswrapper[4746]: E1211 09:55:52.767962 4746 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 09:55:53 crc kubenswrapper[4746]: I1211 09:55:53.629727 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:53 crc kubenswrapper[4746]: E1211 09:55:53.630037 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:54 crc kubenswrapper[4746]: I1211 09:55:54.629884 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:54 crc kubenswrapper[4746]: I1211 09:55:54.629893 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:54 crc kubenswrapper[4746]: I1211 09:55:54.629905 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:54 crc kubenswrapper[4746]: E1211 09:55:54.630294 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:54 crc kubenswrapper[4746]: E1211 09:55:54.630358 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:54 crc kubenswrapper[4746]: E1211 09:55:54.630065 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:55 crc kubenswrapper[4746]: I1211 09:55:55.630984 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:55 crc kubenswrapper[4746]: E1211 09:55:55.631142 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:56 crc kubenswrapper[4746]: I1211 09:55:56.629487 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:56 crc kubenswrapper[4746]: I1211 09:55:56.629536 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:56 crc kubenswrapper[4746]: I1211 09:55:56.629600 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:56 crc kubenswrapper[4746]: E1211 09:55:56.629654 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:56 crc kubenswrapper[4746]: E1211 09:55:56.629805 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:56 crc kubenswrapper[4746]: E1211 09:55:56.629990 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:57 crc kubenswrapper[4746]: I1211 09:55:57.630113 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:57 crc kubenswrapper[4746]: E1211 09:55:57.631194 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:57 crc kubenswrapper[4746]: E1211 09:55:57.769481 4746 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 09:55:58 crc kubenswrapper[4746]: I1211 09:55:58.630177 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:55:58 crc kubenswrapper[4746]: E1211 09:55:58.630353 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:55:58 crc kubenswrapper[4746]: I1211 09:55:58.630427 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:55:58 crc kubenswrapper[4746]: E1211 09:55:58.630861 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:55:58 crc kubenswrapper[4746]: I1211 09:55:58.631168 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:55:58 crc kubenswrapper[4746]: E1211 09:55:58.631292 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:55:58 crc kubenswrapper[4746]: I1211 09:55:58.631382 4746 scope.go:117] "RemoveContainer" containerID="3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab" Dec 11 09:55:59 crc kubenswrapper[4746]: I1211 09:55:59.234847 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovnkube-controller/3.log" Dec 11 09:55:59 crc kubenswrapper[4746]: I1211 09:55:59.238134 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerStarted","Data":"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5"} Dec 11 09:55:59 crc kubenswrapper[4746]: I1211 09:55:59.238496 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:55:59 crc kubenswrapper[4746]: I1211 09:55:59.268338 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podStartSLOduration=109.268318945 podStartE2EDuration="1m49.268318945s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:55:59.267620216 +0000 UTC m=+132.127483559" watchObservedRunningTime="2025-12-11 09:55:59.268318945 +0000 UTC m=+132.128182258" Dec 11 09:55:59 crc kubenswrapper[4746]: I1211 09:55:59.629628 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:55:59 crc kubenswrapper[4746]: E1211 09:55:59.629779 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:55:59 crc kubenswrapper[4746]: I1211 09:55:59.949004 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xh6zv"] Dec 11 09:56:00 crc kubenswrapper[4746]: I1211 09:56:00.241153 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:56:00 crc kubenswrapper[4746]: E1211 09:56:00.241256 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:56:00 crc kubenswrapper[4746]: I1211 09:56:00.630268 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:56:00 crc kubenswrapper[4746]: I1211 09:56:00.630327 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:56:00 crc kubenswrapper[4746]: I1211 09:56:00.630334 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:56:00 crc kubenswrapper[4746]: E1211 09:56:00.630401 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:56:00 crc kubenswrapper[4746]: E1211 09:56:00.630594 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:56:00 crc kubenswrapper[4746]: E1211 09:56:00.630672 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:56:01 crc kubenswrapper[4746]: I1211 09:56:01.629782 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:56:01 crc kubenswrapper[4746]: I1211 09:56:01.629920 4746 scope.go:117] "RemoveContainer" containerID="2e317f8ae14d2ec2bb140d6293e5de8f1b9f1403d8ec9e68da06711aa5c8e467" Dec 11 09:56:01 crc kubenswrapper[4746]: E1211 09:56:01.630058 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:56:02 crc kubenswrapper[4746]: I1211 09:56:02.249591 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r622c_52ba00d9-b0ef-4496-a6b8-e170f405c592/kube-multus/1.log" Dec 11 09:56:02 crc kubenswrapper[4746]: I1211 09:56:02.249903 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r622c" event={"ID":"52ba00d9-b0ef-4496-a6b8-e170f405c592","Type":"ContainerStarted","Data":"d352579c684a02b9fd849e08b256a881f7ee136d38731825f95f347ab33b36a1"} Dec 11 09:56:02 crc kubenswrapper[4746]: I1211 09:56:02.630209 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:56:02 crc kubenswrapper[4746]: I1211 09:56:02.630324 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:56:02 crc kubenswrapper[4746]: E1211 09:56:02.630334 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:56:02 crc kubenswrapper[4746]: E1211 09:56:02.630496 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:56:02 crc kubenswrapper[4746]: I1211 09:56:02.630514 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:56:02 crc kubenswrapper[4746]: E1211 09:56:02.630641 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:56:02 crc kubenswrapper[4746]: E1211 09:56:02.770643 4746 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 11 09:56:03 crc kubenswrapper[4746]: I1211 09:56:03.629668 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:56:03 crc kubenswrapper[4746]: E1211 09:56:03.629821 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:56:04 crc kubenswrapper[4746]: I1211 09:56:04.629905 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:56:04 crc kubenswrapper[4746]: I1211 09:56:04.630004 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:56:04 crc kubenswrapper[4746]: I1211 09:56:04.629931 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:56:04 crc kubenswrapper[4746]: E1211 09:56:04.630124 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:56:04 crc kubenswrapper[4746]: E1211 09:56:04.630295 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:56:04 crc kubenswrapper[4746]: E1211 09:56:04.630454 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:56:05 crc kubenswrapper[4746]: I1211 09:56:05.629978 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:56:05 crc kubenswrapper[4746]: E1211 09:56:05.630148 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:56:06 crc kubenswrapper[4746]: I1211 09:56:06.629595 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:56:06 crc kubenswrapper[4746]: I1211 09:56:06.629663 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:56:06 crc kubenswrapper[4746]: E1211 09:56:06.629724 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 11 09:56:06 crc kubenswrapper[4746]: I1211 09:56:06.629604 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:56:06 crc kubenswrapper[4746]: E1211 09:56:06.629798 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 11 09:56:06 crc kubenswrapper[4746]: E1211 09:56:06.629877 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 11 09:56:07 crc kubenswrapper[4746]: I1211 09:56:07.630219 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:56:07 crc kubenswrapper[4746]: E1211 09:56:07.632118 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xh6zv" podUID="2a55e871-062f-43fd-a1e2-b2296474f4f3" Dec 11 09:56:08 crc kubenswrapper[4746]: I1211 09:56:08.630318 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:56:08 crc kubenswrapper[4746]: I1211 09:56:08.630316 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:56:08 crc kubenswrapper[4746]: I1211 09:56:08.630318 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:56:08 crc kubenswrapper[4746]: I1211 09:56:08.633738 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 11 09:56:08 crc kubenswrapper[4746]: I1211 09:56:08.633793 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 11 09:56:08 crc kubenswrapper[4746]: I1211 09:56:08.633848 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 11 09:56:08 crc kubenswrapper[4746]: I1211 09:56:08.633888 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 11 09:56:09 crc kubenswrapper[4746]: I1211 09:56:09.630354 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:56:09 crc kubenswrapper[4746]: I1211 09:56:09.632297 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 11 09:56:09 crc kubenswrapper[4746]: I1211 09:56:09.632297 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 11 09:56:16 crc kubenswrapper[4746]: I1211 09:56:16.485313 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:16 crc kubenswrapper[4746]: E1211 09:56:16.485544 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:58:18.485512285 +0000 UTC m=+271.345375638 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:16 crc kubenswrapper[4746]: I1211 09:56:16.587079 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:56:16 crc kubenswrapper[4746]: I1211 09:56:16.587205 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:56:16 crc kubenswrapper[4746]: I1211 09:56:16.587314 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:56:16 crc kubenswrapper[4746]: I1211 09:56:16.587422 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:56:16 crc kubenswrapper[4746]: I1211 09:56:16.588721 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:56:16 crc kubenswrapper[4746]: I1211 09:56:16.594289 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:56:16 crc kubenswrapper[4746]: I1211 09:56:16.595018 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:56:16 crc kubenswrapper[4746]: I1211 09:56:16.595708 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 
09:56:16 crc kubenswrapper[4746]: I1211 09:56:16.743913 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 11 09:56:16 crc kubenswrapper[4746]: I1211 09:56:16.750354 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 11 09:56:16 crc kubenswrapper[4746]: I1211 09:56:16.755346 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:56:16 crc kubenswrapper[4746]: W1211 09:56:16.995490 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-d1656206b3254b3651493d0eb9ece9d1f22aefa06558ffbf172ea435edf1dd25 WatchSource:0}: Error finding container d1656206b3254b3651493d0eb9ece9d1f22aefa06558ffbf172ea435edf1dd25: Status 404 returned error can't find the container with id d1656206b3254b3651493d0eb9ece9d1f22aefa06558ffbf172ea435edf1dd25 Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.300973 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6b8bff3f5a1335ea334d380c8486c9f45ef74169231a2ab727f79122e213a85d"} Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.301363 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"abcc269d35959ff8c7e4829a284c35c506d5012990f5e6b9ac0bfe91660f6fe5"} Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.302663 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d9ef68206e7dce20851a10700655a327cccae0486068c730b7885abec78dcddb"} Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.302716 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e046f30575d5afd7693acc3ce552d2456d0eb4a8028fba9935e2bd9baffe51b8"} Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.302862 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.304509 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7f7dc5a0e5c64d84589a51a828da62473cb64900dc8fbca2f7d072c40cd73693"} Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.304556 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"d1656206b3254b3651493d0eb9ece9d1f22aefa06558ffbf172ea435edf1dd25"} Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.662982 4746 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.707806 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lwl94"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.708460 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6b7h4"] Dec 11 09:56:17 crc 
kubenswrapper[4746]: I1211 09:56:17.708648 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.709110 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6b7h4" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.709953 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.710368 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.710671 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.715376 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.769449 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.769630 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.769735 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.769833 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 
09:56:17.769929 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.770278 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.772182 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.772424 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.772657 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.772826 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.772947 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.773067 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.773205 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.773283 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.773368 4746 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.773525 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.773535 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.773922 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8ctft"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.774365 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.779335 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.779500 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.781451 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.782898 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.783516 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.783720 
4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.783749 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.784383 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-4n677"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.784901 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4n677"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.788090 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.788540 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.788847 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.789453 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.789728 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.800402 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f9b1464c-2dc7-4c8c-82e6-0c72e106edf5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wnp8m\" (UID: \"f9b1464c-2dc7-4c8c-82e6-0c72e106edf5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.800455 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69c29801-25a3-49f6-a915-a92376892daf-config\") pod \"kube-apiserver-operator-766d6c64bb-zkxvv\" (UID: \"69c29801-25a3-49f6-a915-a92376892daf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.800496 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.800529 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/fce8473a-0f95-4788-af68-35b608885c41-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8ctft\" (UID: \"fce8473a-0f95-4788-af68-35b608885c41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.800564 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1196114-7a7a-4f77-951a-20d10c32d0b2-config\") pod \"machine-api-operator-5694c8668f-lwl94\" (UID: \"b1196114-7a7a-4f77-951a-20d10c32d0b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.800605 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h85k\" (UniqueName: \"kubernetes.io/projected/b1196114-7a7a-4f77-951a-20d10c32d0b2-kube-api-access-7h85k\") pod \"machine-api-operator-5694c8668f-lwl94\" (UID: \"b1196114-7a7a-4f77-951a-20d10c32d0b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.800635 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9b1464c-2dc7-4c8c-82e6-0c72e106edf5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wnp8m\" (UID: \"f9b1464c-2dc7-4c8c-82e6-0c72e106edf5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.800743 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a701598c-dc6e-418a-ad5e-3a93b8bc7f02-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g2qhz\" (UID: \"a701598c-dc6e-418a-ad5e-3a93b8bc7f02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.800827 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.800902 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9b1464c-2dc7-4c8c-82e6-0c72e106edf5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wnp8m\" (UID: \"f9b1464c-2dc7-4c8c-82e6-0c72e106edf5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.800993 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.800939 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj955\" (UniqueName: \"kubernetes.io/projected/a701598c-dc6e-418a-ad5e-3a93b8bc7f02-kube-api-access-bj955\") pod \"openshift-apiserver-operator-796bbdcf4f-g2qhz\" (UID: \"a701598c-dc6e-418a-ad5e-3a93b8bc7f02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.801027 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29gh4\" (UniqueName: \"kubernetes.io/projected/a124ef34-55e9-4974-95ba-4b55ac4ad5ce-kube-api-access-29gh4\") pod \"cluster-samples-operator-665b6dd947-6b7h4\" (UID: \"a124ef34-55e9-4974-95ba-4b55ac4ad5ce\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6b7h4"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.801199 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b1196114-7a7a-4f77-951a-20d10c32d0b2-images\") pod \"machine-api-operator-5694c8668f-lwl94\" (UID: \"b1196114-7a7a-4f77-951a-20d10c32d0b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.801223 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.801340 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjzr4\" (UniqueName: \"kubernetes.io/projected/fce8473a-0f95-4788-af68-35b608885c41-kube-api-access-sjzr4\") pod \"openshift-config-operator-7777fb866f-8ctft\" (UID: \"fce8473a-0f95-4788-af68-35b608885c41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.801459 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1196114-7a7a-4f77-951a-20d10c32d0b2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lwl94\" (UID: \"b1196114-7a7a-4f77-951a-20d10c32d0b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.801476 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.801500 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqcn4\" (UniqueName: \"kubernetes.io/projected/f9b1464c-2dc7-4c8c-82e6-0c72e106edf5-kube-api-access-hqcn4\") pod \"cluster-image-registry-operator-dc59b4c8b-wnp8m\" (UID: \"f9b1464c-2dc7-4c8c-82e6-0c72e106edf5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.801597 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a124ef34-55e9-4974-95ba-4b55ac4ad5ce-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6b7h4\" (UID: \"a124ef34-55e9-4974-95ba-4b55ac4ad5ce\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6b7h4"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.801630 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69c29801-25a3-49f6-a915-a92376892daf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zkxvv\" (UID: \"69c29801-25a3-49f6-a915-a92376892daf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.801661 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69c29801-25a3-49f6-a915-a92376892daf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zkxvv\" (UID: \"69c29801-25a3-49f6-a915-a92376892daf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.801734 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.801728 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fce8473a-0f95-4788-af68-35b608885c41-serving-cert\") pod \"openshift-config-operator-7777fb866f-8ctft\" (UID: \"fce8473a-0f95-4788-af68-35b608885c41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.801813 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a701598c-dc6e-418a-ad5e-3a93b8bc7f02-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g2qhz\" (UID: \"a701598c-dc6e-418a-ad5e-3a93b8bc7f02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.801831 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.802358 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.815815 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.816156 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.816303 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.816419 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.816575 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.816848 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-czfmv"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.817202 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pzmdh"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.817448 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.817661 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bsdlm"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.817739 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.818090 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bsdlm"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.819236 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.823736 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.824929 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.825226 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.825520 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.825671 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.825882 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.826010 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8b44g"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.826745 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8b44g"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.829039 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.829126 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.829383 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.829506 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.829575 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4t7sz"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.829686 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.829836 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.830017 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4t7sz"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.830315 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-nqbcs"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.830742 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-nqbcs"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.831253 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.833738 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.834166 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9qmb2"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.834368 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.834448 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.834600 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.834747 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.835610 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.836495 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.837550 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.837658 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.837681 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.837718 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.837792 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.837667 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.837897 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.837877 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.838070 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.838216 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.838236 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.838310 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.838326 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.838224 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.838641 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.838706 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.838820 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.839076 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.839185 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.839543 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.839904 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.839990 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.840198 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.840336 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.840397 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.841084 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-t5lpk"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.841374 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.841709 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.841737 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-t5lpk"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.841743 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.841988 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.842219 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.842267 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.842362 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.843034 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.843459 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.842388 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.847241 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.863891 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.864763 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.872787 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.873575 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.882571 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.882728 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.884145 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.884569 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.886880 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.886905 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.887067 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.890691 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.892210 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.892891 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7hm64"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.898646 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.898742 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7hm64"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.901142 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5bgx9"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.901377 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.901574 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5bgx9"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.901808 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.902904 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.902940 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.903499 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.904419 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.904562 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.904720 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.906976 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.907007 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs"]
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.906145 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcct5\" (UniqueName: \"kubernetes.io/projected/1362f4c2-fffd-4e0f-9a2c-07fe8666d6be-kube-api-access-pcct5\") pod \"console-operator-58897d9998-bsdlm\" (UID: \"1362f4c2-fffd-4e0f-9a2c-07fe8666d6be\") " pod="openshift-console-operator/console-operator-58897d9998-bsdlm"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.907257 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqcn4\" (UniqueName: \"kubernetes.io/projected/f9b1464c-2dc7-4c8c-82e6-0c72e106edf5-kube-api-access-hqcn4\") pod \"cluster-image-registry-operator-dc59b4c8b-wnp8m\" (UID: \"f9b1464c-2dc7-4c8c-82e6-0c72e106edf5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.907281 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-oauth-serving-cert\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.907303 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda084f4-b624-4036-8da8-27d83af188ba-config\") pod \"route-controller-manager-6576b87f9c-fgrh2\" (UID: \"fda084f4-b624-4036-8da8-27d83af188ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.907321 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flqmx\" (UniqueName: \"kubernetes.io/projected/fda084f4-b624-4036-8da8-27d83af188ba-kube-api-access-flqmx\") pod \"route-controller-manager-6576b87f9c-fgrh2\" (UID: \"fda084f4-b624-4036-8da8-27d83af188ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.907338 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/97b454ad-ff7c-4c7b-9d53-79b92b7520de-audit\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.907359 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a124ef34-55e9-4974-95ba-4b55ac4ad5ce-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6b7h4\" (UID: \"a124ef34-55e9-4974-95ba-4b55ac4ad5ce\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6b7h4"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.907375 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.907392 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.907432 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69c29801-25a3-49f6-a915-a92376892daf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zkxvv\" (UID: \"69c29801-25a3-49f6-a915-a92376892daf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.907472 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69c29801-25a3-49f6-a915-a92376892daf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zkxvv\" (UID: \"69c29801-25a3-49f6-a915-a92376892daf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.907504 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fce8473a-0f95-4788-af68-35b608885c41-serving-cert\") pod \"openshift-config-operator-7777fb866f-8ctft\" (UID: \"fce8473a-0f95-4788-af68-35b608885c41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.907693 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908105 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6cae8380-85f7-4534-9bfc-46c5a3d6711f-audit-dir\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908154 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/13c5b559-f24d-4e4e-a905-80a3da8dd577-auth-proxy-config\") pod \"machine-approver-56656f9798-s5g5g\" (UID: \"13c5b559-f24d-4e4e-a905-80a3da8dd577\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908195 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97b454ad-ff7c-4c7b-9d53-79b92b7520de-serving-cert\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908257 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1362f4c2-fffd-4e0f-9a2c-07fe8666d6be-serving-cert\") pod \"console-operator-58897d9998-bsdlm\" (UID: \"1362f4c2-fffd-4e0f-9a2c-07fe8666d6be\") " pod="openshift-console-operator/console-operator-58897d9998-bsdlm"
Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908313 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-serving-cert\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908340 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-trusted-ca-bundle\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908361 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/97b454ad-ff7c-4c7b-9d53-79b92b7520de-etcd-serving-ca\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908414 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-oauth-config\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908436 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7-serving-cert\") pod \"authentication-operator-69f744f599-pzmdh\" (UID: \"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 
09:56:17.908466 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a701598c-dc6e-418a-ad5e-3a93b8bc7f02-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g2qhz\" (UID: \"a701598c-dc6e-418a-ad5e-3a93b8bc7f02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908489 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13c5b559-f24d-4e4e-a905-80a3da8dd577-config\") pod \"machine-approver-56656f9798-s5g5g\" (UID: \"13c5b559-f24d-4e4e-a905-80a3da8dd577\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908525 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksx79\" (UniqueName: \"kubernetes.io/projected/fcffdcae-c940-447e-8b52-e6ba9df066cd-kube-api-access-ksx79\") pod \"machine-config-controller-84d6567774-sxscv\" (UID: \"fcffdcae-c940-447e-8b52-e6ba9df066cd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908552 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f9b1464c-2dc7-4c8c-82e6-0c72e106edf5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wnp8m\" (UID: \"f9b1464c-2dc7-4c8c-82e6-0c72e106edf5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908776 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908883 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908909 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fcffdcae-c940-447e-8b52-e6ba9df066cd-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-sxscv\" (UID: \"fcffdcae-c940-447e-8b52-e6ba9df066cd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908933 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fda084f4-b624-4036-8da8-27d83af188ba-client-ca\") pod \"route-controller-manager-6576b87f9c-fgrh2\" (UID: \"fda084f4-b624-4036-8da8-27d83af188ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908963 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69c29801-25a3-49f6-a915-a92376892daf-config\") pod \"kube-apiserver-operator-766d6c64bb-zkxvv\" (UID: \"69c29801-25a3-49f6-a915-a92376892daf\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.908985 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/fce8473a-0f95-4788-af68-35b608885c41-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8ctft\" (UID: \"fce8473a-0f95-4788-af68-35b608885c41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.909007 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fda084f4-b624-4036-8da8-27d83af188ba-serving-cert\") pod \"route-controller-manager-6576b87f9c-fgrh2\" (UID: \"fda084f4-b624-4036-8da8-27d83af188ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.909027 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcffdcae-c940-447e-8b52-e6ba9df066cd-proxy-tls\") pod \"machine-config-controller-84d6567774-sxscv\" (UID: \"fcffdcae-c940-447e-8b52-e6ba9df066cd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.909070 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1196114-7a7a-4f77-951a-20d10c32d0b2-config\") pod \"machine-api-operator-5694c8668f-lwl94\" (UID: \"b1196114-7a7a-4f77-951a-20d10c32d0b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.909563 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" 
(UniqueName: \"kubernetes.io/empty-dir/fce8473a-0f95-4788-af68-35b608885c41-available-featuregates\") pod \"openshift-config-operator-7777fb866f-8ctft\" (UID: \"fce8473a-0f95-4788-af68-35b608885c41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.909816 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69c29801-25a3-49f6-a915-a92376892daf-config\") pod \"kube-apiserver-operator-766d6c64bb-zkxvv\" (UID: \"69c29801-25a3-49f6-a915-a92376892daf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.909884 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1196114-7a7a-4f77-951a-20d10c32d0b2-config\") pod \"machine-api-operator-5694c8668f-lwl94\" (UID: \"b1196114-7a7a-4f77-951a-20d10c32d0b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.910649 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.910793 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h85k\" (UniqueName: \"kubernetes.io/projected/b1196114-7a7a-4f77-951a-20d10c32d0b2-kube-api-access-7h85k\") pod \"machine-api-operator-5694c8668f-lwl94\" (UID: \"b1196114-7a7a-4f77-951a-20d10c32d0b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.910845 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/13c5b559-f24d-4e4e-a905-80a3da8dd577-machine-approver-tls\") pod \"machine-approver-56656f9798-s5g5g\" (UID: \"13c5b559-f24d-4e4e-a905-80a3da8dd577\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.910874 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9b1464c-2dc7-4c8c-82e6-0c72e106edf5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wnp8m\" (UID: \"f9b1464c-2dc7-4c8c-82e6-0c72e106edf5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911177 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgq54\" (UniqueName: \"kubernetes.io/projected/2d6e68f4-a35b-43d1-b1fb-95600add4933-kube-api-access-rgq54\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911206 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911228 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97b454ad-ff7c-4c7b-9d53-79b92b7520de-audit-dir\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:17 crc 
kubenswrapper[4746]: I1211 09:56:17.911304 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a701598c-dc6e-418a-ad5e-3a93b8bc7f02-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g2qhz\" (UID: \"a701598c-dc6e-418a-ad5e-3a93b8bc7f02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911344 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911363 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7-config\") pod \"authentication-operator-69f744f599-pzmdh\" (UID: \"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911404 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfwcs\" (UniqueName: \"kubernetes.io/projected/49f71235-96fa-4452-8805-d08461253a1f-kube-api-access-nfwcs\") pod \"olm-operator-6b444d44fb-mdgqr\" (UID: \"49f71235-96fa-4452-8805-d08461253a1f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911436 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db7cc\" (UniqueName: 
\"kubernetes.io/projected/3534a1e6-5e3c-4ae5-a981-228d9ae0d5bb-kube-api-access-db7cc\") pod \"control-plane-machine-set-operator-78cbb6b69f-4t7sz\" (UID: \"3534a1e6-5e3c-4ae5-a981-228d9ae0d5bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4t7sz" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911474 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9b1464c-2dc7-4c8c-82e6-0c72e106edf5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wnp8m\" (UID: \"f9b1464c-2dc7-4c8c-82e6-0c72e106edf5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911491 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/97b454ad-ff7c-4c7b-9d53-79b92b7520de-node-pullsecrets\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911509 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/49f71235-96fa-4452-8805-d08461253a1f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mdgqr\" (UID: \"49f71235-96fa-4452-8805-d08461253a1f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911543 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: 
\"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911564 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj955\" (UniqueName: \"kubernetes.io/projected/a701598c-dc6e-418a-ad5e-3a93b8bc7f02-kube-api-access-bj955\") pod \"openshift-apiserver-operator-796bbdcf4f-g2qhz\" (UID: \"a701598c-dc6e-418a-ad5e-3a93b8bc7f02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911632 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-service-ca\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911649 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911666 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-config\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911684 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-s2rn8\" (UniqueName: \"kubernetes.io/projected/6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7-kube-api-access-s2rn8\") pod \"authentication-operator-69f744f599-pzmdh\" (UID: \"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911719 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1fbd58c-6a34-456a-a1d6-854c25fb0d9f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-t5lpk\" (UID: \"d1fbd58c-6a34-456a-a1d6-854c25fb0d9f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t5lpk" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911736 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b454ad-ff7c-4c7b-9d53-79b92b7520de-config\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911755 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29gh4\" (UniqueName: \"kubernetes.io/projected/a124ef34-55e9-4974-95ba-4b55ac4ad5ce-kube-api-access-29gh4\") pod \"cluster-samples-operator-665b6dd947-6b7h4\" (UID: \"a124ef34-55e9-4974-95ba-4b55ac4ad5ce\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6b7h4" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911770 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pzmdh\" (UID: \"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911805 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3534a1e6-5e3c-4ae5-a981-228d9ae0d5bb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4t7sz\" (UID: \"3534a1e6-5e3c-4ae5-a981-228d9ae0d5bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4t7sz" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911825 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7-service-ca-bundle\") pod \"authentication-operator-69f744f599-pzmdh\" (UID: \"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911840 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b1196114-7a7a-4f77-951a-20d10c32d0b2-images\") pod \"machine-api-operator-5694c8668f-lwl94\" (UID: \"b1196114-7a7a-4f77-951a-20d10c32d0b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911865 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fce8473a-0f95-4788-af68-35b608885c41-serving-cert\") pod \"openshift-config-operator-7777fb866f-8ctft\" (UID: \"fce8473a-0f95-4788-af68-35b608885c41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911878 4746 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-sjzr4\" (UniqueName: \"kubernetes.io/projected/fce8473a-0f95-4788-af68-35b608885c41-kube-api-access-sjzr4\") pod \"openshift-config-operator-7777fb866f-8ctft\" (UID: \"fce8473a-0f95-4788-af68-35b608885c41\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911897 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66hfp\" (UniqueName: \"kubernetes.io/projected/97b454ad-ff7c-4c7b-9d53-79b92b7520de-kube-api-access-66hfp\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911948 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/97b454ad-ff7c-4c7b-9d53-79b92b7520de-encryption-config\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911979 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.911997 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqql8\" (UniqueName: \"kubernetes.io/projected/6cae8380-85f7-4534-9bfc-46c5a3d6711f-kube-api-access-hqql8\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.912029 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/97b454ad-ff7c-4c7b-9d53-79b92b7520de-image-import-ca\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.912158 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1362f4c2-fffd-4e0f-9a2c-07fe8666d6be-config\") pod \"console-operator-58897d9998-bsdlm\" (UID: \"1362f4c2-fffd-4e0f-9a2c-07fe8666d6be\") " pod="openshift-console-operator/console-operator-58897d9998-bsdlm" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.912188 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1362f4c2-fffd-4e0f-9a2c-07fe8666d6be-trusted-ca\") pod \"console-operator-58897d9998-bsdlm\" (UID: \"1362f4c2-fffd-4e0f-9a2c-07fe8666d6be\") " pod="openshift-console-operator/console-operator-58897d9998-bsdlm" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.912236 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1196114-7a7a-4f77-951a-20d10c32d0b2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lwl94\" (UID: \"b1196114-7a7a-4f77-951a-20d10c32d0b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.912266 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.912289 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-audit-policies\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.912328 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97b454ad-ff7c-4c7b-9d53-79b92b7520de-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.912347 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f8jb\" (UniqueName: \"kubernetes.io/projected/13c5b559-f24d-4e4e-a905-80a3da8dd577-kube-api-access-2f8jb\") pod \"machine-approver-56656f9798-s5g5g\" (UID: \"13c5b559-f24d-4e4e-a905-80a3da8dd577\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.912431 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.912478 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/97b454ad-ff7c-4c7b-9d53-79b92b7520de-etcd-client\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.912502 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2tg9\" (UniqueName: \"kubernetes.io/projected/d1fbd58c-6a34-456a-a1d6-854c25fb0d9f-kube-api-access-k2tg9\") pod \"multus-admission-controller-857f4d67dd-t5lpk\" (UID: \"d1fbd58c-6a34-456a-a1d6-854c25fb0d9f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t5lpk" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.912523 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/49f71235-96fa-4452-8805-d08461253a1f-srv-cert\") pod \"olm-operator-6b444d44fb-mdgqr\" (UID: \"49f71235-96fa-4452-8805-d08461253a1f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.912567 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7gr5\" (UniqueName: \"kubernetes.io/projected/2895076f-4e51-4f1c-ae8b-e8e9d1b8888d-kube-api-access-b7gr5\") pod \"downloads-7954f5f757-nqbcs\" (UID: \"2895076f-4e51-4f1c-ae8b-e8e9d1b8888d\") " pod="openshift-console/downloads-7954f5f757-nqbcs" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.912594 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a701598c-dc6e-418a-ad5e-3a93b8bc7f02-config\") pod \"openshift-apiserver-operator-796bbdcf4f-g2qhz\" (UID: \"a701598c-dc6e-418a-ad5e-3a93b8bc7f02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.913561 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b1196114-7a7a-4f77-951a-20d10c32d0b2-images\") pod \"machine-api-operator-5694c8668f-lwl94\" (UID: \"b1196114-7a7a-4f77-951a-20d10c32d0b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.915265 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a701598c-dc6e-418a-ad5e-3a93b8bc7f02-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-g2qhz\" (UID: \"a701598c-dc6e-418a-ad5e-3a93b8bc7f02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.916490 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f9b1464c-2dc7-4c8c-82e6-0c72e106edf5-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wnp8m\" (UID: \"f9b1464c-2dc7-4c8c-82e6-0c72e106edf5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.917030 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a124ef34-55e9-4974-95ba-4b55ac4ad5ce-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6b7h4\" (UID: \"a124ef34-55e9-4974-95ba-4b55ac4ad5ce\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6b7h4" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 
09:56:17.918616 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/b1196114-7a7a-4f77-951a-20d10c32d0b2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-lwl94\" (UID: \"b1196114-7a7a-4f77-951a-20d10c32d0b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.922226 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.923358 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.925337 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.925459 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.925934 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/f9b1464c-2dc7-4c8c-82e6-0c72e106edf5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wnp8m\" (UID: \"f9b1464c-2dc7-4c8c-82e6-0c72e106edf5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.926457 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.927238 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.927545 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-26ppb"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.928190 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.929951 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.931669 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69c29801-25a3-49f6-a915-a92376892daf-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-zkxvv\" (UID: \"69c29801-25a3-49f6-a915-a92376892daf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.939752 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.941160 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sf4lq"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.941897 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sf4lq" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.944474 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-66mrd"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.945381 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.945762 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-krdwz"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.946634 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.946987 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.947457 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.950197 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.952110 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.954852 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vjs55"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.955287 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.955836 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.956088 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.956164 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vjs55" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.956888 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.962867 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.965002 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lwl94"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.967724 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.970969 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6b7h4"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.973536 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.974958 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-952k5"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.978019 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4n677"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.978140 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-952k5" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.983070 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pzmdh"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.984121 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.985349 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-czfmv"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.986427 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8b44g"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.989107 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.990221 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4t7sz"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.990405 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.991250 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-66mrd"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.992801 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7hm64"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.993857 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 
09:56:17.995032 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.996245 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs"] Dec 11 09:56:17 crc kubenswrapper[4746]: I1211 09:56:17.998714 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.000839 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8ctft"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.002329 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-nh8tz"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.003441 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nh8tz" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.005139 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.007699 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-nqbcs"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.009570 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.010274 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.011574 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013081 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-952k5"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013488 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/13c5b559-f24d-4e4e-a905-80a3da8dd577-auth-proxy-config\") pod \"machine-approver-56656f9798-s5g5g\" (UID: \"13c5b559-f24d-4e4e-a905-80a3da8dd577\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013528 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/97b454ad-ff7c-4c7b-9d53-79b92b7520de-serving-cert\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc 
kubenswrapper[4746]: I1211 09:56:18.013550 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1362f4c2-fffd-4e0f-9a2c-07fe8666d6be-serving-cert\") pod \"console-operator-58897d9998-bsdlm\" (UID: \"1362f4c2-fffd-4e0f-9a2c-07fe8666d6be\") " pod="openshift-console-operator/console-operator-58897d9998-bsdlm" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013566 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-serving-cert\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013586 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-trusted-ca-bundle\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013600 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/97b454ad-ff7c-4c7b-9d53-79b92b7520de-etcd-serving-ca\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013615 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7-serving-cert\") pod \"authentication-operator-69f744f599-pzmdh\" (UID: \"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013637 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-oauth-config\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013659 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13c5b559-f24d-4e4e-a905-80a3da8dd577-config\") pod \"machine-approver-56656f9798-s5g5g\" (UID: \"13c5b559-f24d-4e4e-a905-80a3da8dd577\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013682 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksx79\" (UniqueName: \"kubernetes.io/projected/fcffdcae-c940-447e-8b52-e6ba9df066cd-kube-api-access-ksx79\") pod \"machine-config-controller-84d6567774-sxscv\" (UID: \"fcffdcae-c940-447e-8b52-e6ba9df066cd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013703 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fcffdcae-c940-447e-8b52-e6ba9df066cd-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-sxscv\" (UID: \"fcffdcae-c940-447e-8b52-e6ba9df066cd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013724 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013740 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013756 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fda084f4-b624-4036-8da8-27d83af188ba-client-ca\") pod \"route-controller-manager-6576b87f9c-fgrh2\" (UID: \"fda084f4-b624-4036-8da8-27d83af188ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013782 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fda084f4-b624-4036-8da8-27d83af188ba-serving-cert\") pod \"route-controller-manager-6576b87f9c-fgrh2\" (UID: \"fda084f4-b624-4036-8da8-27d83af188ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013814 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcffdcae-c940-447e-8b52-e6ba9df066cd-proxy-tls\") pod \"machine-config-controller-84d6567774-sxscv\" (UID: \"fcffdcae-c940-447e-8b52-e6ba9df066cd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv" Dec 11 
09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013846 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/13c5b559-f24d-4e4e-a905-80a3da8dd577-machine-approver-tls\") pod \"machine-approver-56656f9798-s5g5g\" (UID: \"13c5b559-f24d-4e4e-a905-80a3da8dd577\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013873 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgq54\" (UniqueName: \"kubernetes.io/projected/2d6e68f4-a35b-43d1-b1fb-95600add4933-kube-api-access-rgq54\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013895 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013917 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97b454ad-ff7c-4c7b-9d53-79b92b7520de-audit-dir\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013952 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7-config\") pod \"authentication-operator-69f744f599-pzmdh\" (UID: \"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013968 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfwcs\" (UniqueName: \"kubernetes.io/projected/49f71235-96fa-4452-8805-d08461253a1f-kube-api-access-nfwcs\") pod \"olm-operator-6b444d44fb-mdgqr\" (UID: \"49f71235-96fa-4452-8805-d08461253a1f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.013986 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014005 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db7cc\" (UniqueName: \"kubernetes.io/projected/3534a1e6-5e3c-4ae5-a981-228d9ae0d5bb-kube-api-access-db7cc\") pod \"control-plane-machine-set-operator-78cbb6b69f-4t7sz\" (UID: \"3534a1e6-5e3c-4ae5-a981-228d9ae0d5bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4t7sz" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014033 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/97b454ad-ff7c-4c7b-9d53-79b92b7520de-node-pullsecrets\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014069 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/49f71235-96fa-4452-8805-d08461253a1f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mdgqr\" (UID: \"49f71235-96fa-4452-8805-d08461253a1f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014085 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014100 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014120 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-service-ca\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014124 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/13c5b559-f24d-4e4e-a905-80a3da8dd577-auth-proxy-config\") pod \"machine-approver-56656f9798-s5g5g\" (UID: \"13c5b559-f24d-4e4e-a905-80a3da8dd577\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g" Dec 11 
09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014134 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-config\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014151 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2rn8\" (UniqueName: \"kubernetes.io/projected/6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7-kube-api-access-s2rn8\") pod \"authentication-operator-69f744f599-pzmdh\" (UID: \"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014171 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1fbd58c-6a34-456a-a1d6-854c25fb0d9f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-t5lpk\" (UID: \"d1fbd58c-6a34-456a-a1d6-854c25fb0d9f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t5lpk" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014188 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b454ad-ff7c-4c7b-9d53-79b92b7520de-config\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014214 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pzmdh\" (UID: \"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014231 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3534a1e6-5e3c-4ae5-a981-228d9ae0d5bb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4t7sz\" (UID: \"3534a1e6-5e3c-4ae5-a981-228d9ae0d5bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4t7sz" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014250 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7-service-ca-bundle\") pod \"authentication-operator-69f744f599-pzmdh\" (UID: \"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014272 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66hfp\" (UniqueName: \"kubernetes.io/projected/97b454ad-ff7c-4c7b-9d53-79b92b7520de-kube-api-access-66hfp\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014287 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqql8\" (UniqueName: \"kubernetes.io/projected/6cae8380-85f7-4534-9bfc-46c5a3d6711f-kube-api-access-hqql8\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014305 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/97b454ad-ff7c-4c7b-9d53-79b92b7520de-image-import-ca\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014332 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/97b454ad-ff7c-4c7b-9d53-79b92b7520de-encryption-config\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014349 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014364 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1362f4c2-fffd-4e0f-9a2c-07fe8666d6be-config\") pod \"console-operator-58897d9998-bsdlm\" (UID: \"1362f4c2-fffd-4e0f-9a2c-07fe8666d6be\") " pod="openshift-console-operator/console-operator-58897d9998-bsdlm" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014380 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1362f4c2-fffd-4e0f-9a2c-07fe8666d6be-trusted-ca\") pod \"console-operator-58897d9998-bsdlm\" (UID: \"1362f4c2-fffd-4e0f-9a2c-07fe8666d6be\") " pod="openshift-console-operator/console-operator-58897d9998-bsdlm" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014397 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014416 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-audit-policies\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014431 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97b454ad-ff7c-4c7b-9d53-79b92b7520de-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014446 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f8jb\" (UniqueName: \"kubernetes.io/projected/13c5b559-f24d-4e4e-a905-80a3da8dd577-kube-api-access-2f8jb\") pod \"machine-approver-56656f9798-s5g5g\" (UID: \"13c5b559-f24d-4e4e-a905-80a3da8dd577\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014513 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014531 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/97b454ad-ff7c-4c7b-9d53-79b92b7520de-etcd-client\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014576 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2tg9\" (UniqueName: \"kubernetes.io/projected/d1fbd58c-6a34-456a-a1d6-854c25fb0d9f-kube-api-access-k2tg9\") pod \"multus-admission-controller-857f4d67dd-t5lpk\" (UID: \"d1fbd58c-6a34-456a-a1d6-854c25fb0d9f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t5lpk" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014671 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/49f71235-96fa-4452-8805-d08461253a1f-srv-cert\") pod \"olm-operator-6b444d44fb-mdgqr\" (UID: \"49f71235-96fa-4452-8805-d08461253a1f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014694 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7gr5\" (UniqueName: \"kubernetes.io/projected/2895076f-4e51-4f1c-ae8b-e8e9d1b8888d-kube-api-access-b7gr5\") pod \"downloads-7954f5f757-nqbcs\" (UID: \"2895076f-4e51-4f1c-ae8b-e8e9d1b8888d\") " pod="openshift-console/downloads-7954f5f757-nqbcs" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014710 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flqmx\" (UniqueName: \"kubernetes.io/projected/fda084f4-b624-4036-8da8-27d83af188ba-kube-api-access-flqmx\") pod 
\"route-controller-manager-6576b87f9c-fgrh2\" (UID: \"fda084f4-b624-4036-8da8-27d83af188ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014724 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/97b454ad-ff7c-4c7b-9d53-79b92b7520de-audit\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014740 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcct5\" (UniqueName: \"kubernetes.io/projected/1362f4c2-fffd-4e0f-9a2c-07fe8666d6be-kube-api-access-pcct5\") pod \"console-operator-58897d9998-bsdlm\" (UID: \"1362f4c2-fffd-4e0f-9a2c-07fe8666d6be\") " pod="openshift-console-operator/console-operator-58897d9998-bsdlm" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014796 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-oauth-serving-cert\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014813 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda084f4-b624-4036-8da8-27d83af188ba-config\") pod \"route-controller-manager-6576b87f9c-fgrh2\" (UID: \"fda084f4-b624-4036-8da8-27d83af188ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014829 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014843 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6cae8380-85f7-4534-9bfc-46c5a3d6711f-audit-dir\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.014859 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.015739 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/13c5b559-f24d-4e4e-a905-80a3da8dd577-config\") pod \"machine-approver-56656f9798-s5g5g\" (UID: \"13c5b559-f24d-4e4e-a905-80a3da8dd577\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.016819 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/97b454ad-ff7c-4c7b-9d53-79b92b7520de-etcd-serving-ca\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.017554 4746 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/97b454ad-ff7c-4c7b-9d53-79b92b7520de-audit\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.017680 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.017832 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/13c5b559-f24d-4e4e-a905-80a3da8dd577-machine-approver-tls\") pod \"machine-approver-56656f9798-s5g5g\" (UID: \"13c5b559-f24d-4e4e-a905-80a3da8dd577\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.018261 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.018543 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-oauth-serving-cert\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.018705 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fda084f4-b624-4036-8da8-27d83af188ba-client-ca\") pod \"route-controller-manager-6576b87f9c-fgrh2\" (UID: \"fda084f4-b624-4036-8da8-27d83af188ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.019309 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fcffdcae-c940-447e-8b52-e6ba9df066cd-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-sxscv\" (UID: \"fcffdcae-c940-447e-8b52-e6ba9df066cd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.019312 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/97b454ad-ff7c-4c7b-9d53-79b92b7520de-etcd-client\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.019377 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bsdlm"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.019517 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda084f4-b624-4036-8da8-27d83af188ba-config\") pod \"route-controller-manager-6576b87f9c-fgrh2\" (UID: \"fda084f4-b624-4036-8da8-27d83af188ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.019738 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/97b454ad-ff7c-4c7b-9d53-79b92b7520de-serving-cert\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.020029 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.020064 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.020090 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/97b454ad-ff7c-4c7b-9d53-79b92b7520de-image-import-ca\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.020592 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-audit-policies\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.020647 4746 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.020703 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/97b454ad-ff7c-4c7b-9d53-79b92b7520de-audit-dir\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.021306 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fcffdcae-c940-447e-8b52-e6ba9df066cd-proxy-tls\") pod \"machine-config-controller-84d6567774-sxscv\" (UID: \"fcffdcae-c940-447e-8b52-e6ba9df066cd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.021655 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.021399 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7-config\") pod \"authentication-operator-69f744f599-pzmdh\" (UID: \"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" Dec 11 09:56:18 crc kubenswrapper[4746]: 
I1211 09:56:18.021721 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6cae8380-85f7-4534-9bfc-46c5a3d6711f-audit-dir\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.021762 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/97b454ad-ff7c-4c7b-9d53-79b92b7520de-node-pullsecrets\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.022326 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/97b454ad-ff7c-4c7b-9d53-79b92b7520de-config\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.022645 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-serving-cert\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.022954 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/49f71235-96fa-4452-8805-d08461253a1f-srv-cert\") pod \"olm-operator-6b444d44fb-mdgqr\" (UID: \"49f71235-96fa-4452-8805-d08461253a1f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.022978 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-service-ca\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.023142 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.023182 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-config\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.021323 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97b454ad-ff7c-4c7b-9d53-79b92b7520de-trusted-ca-bundle\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.023559 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/97b454ad-ff7c-4c7b-9d53-79b92b7520de-encryption-config\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.023919 4746 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.023925 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-trusted-ca-bundle\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.024446 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7-serving-cert\") pod \"authentication-operator-69f744f599-pzmdh\" (UID: \"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.024505 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7-service-ca-bundle\") pod \"authentication-operator-69f744f599-pzmdh\" (UID: \"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.024943 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1362f4c2-fffd-4e0f-9a2c-07fe8666d6be-config\") pod \"console-operator-58897d9998-bsdlm\" (UID: \"1362f4c2-fffd-4e0f-9a2c-07fe8666d6be\") " pod="openshift-console-operator/console-operator-58897d9998-bsdlm" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 
09:56:18.025784 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pzmdh\" (UID: \"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.026277 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.026418 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1362f4c2-fffd-4e0f-9a2c-07fe8666d6be-trusted-ca\") pod \"console-operator-58897d9998-bsdlm\" (UID: \"1362f4c2-fffd-4e0f-9a2c-07fe8666d6be\") " pod="openshift-console-operator/console-operator-58897d9998-bsdlm" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.026731 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1fbd58c-6a34-456a-a1d6-854c25fb0d9f-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-t5lpk\" (UID: \"d1fbd58c-6a34-456a-a1d6-854c25fb0d9f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t5lpk" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.026795 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: 
\"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.026844 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3534a1e6-5e3c-4ae5-a981-228d9ae0d5bb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-4t7sz\" (UID: \"3534a1e6-5e3c-4ae5-a981-228d9ae0d5bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4t7sz" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.026869 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1362f4c2-fffd-4e0f-9a2c-07fe8666d6be-serving-cert\") pod \"console-operator-58897d9998-bsdlm\" (UID: \"1362f4c2-fffd-4e0f-9a2c-07fe8666d6be\") " pod="openshift-console-operator/console-operator-58897d9998-bsdlm" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.027422 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/49f71235-96fa-4452-8805-d08461253a1f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-mdgqr\" (UID: \"49f71235-96fa-4452-8805-d08461253a1f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.027752 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.028015 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: 
\"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.028543 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-oauth-config\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.028567 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fda084f4-b624-4036-8da8-27d83af188ba-serving-cert\") pod \"route-controller-manager-6576b87f9c-fgrh2\" (UID: \"fda084f4-b624-4036-8da8-27d83af188ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.030454 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-t5lpk"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.031975 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.033566 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-26ppb"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.034379 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.035110 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.036085 4746 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.037226 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vjs55"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.038557 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.039742 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sf4lq"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.041040 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9qmb2"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.042435 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nh8tz"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.043899 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.045060 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.047522 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-krdwz"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.047607 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.049035 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-s9pkn"] Dec 11 09:56:18 crc 
kubenswrapper[4746]: I1211 09:56:18.049776 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-s9pkn" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.051423 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.051647 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m8w9j"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.052886 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m8w9j"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.052978 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.070148 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.090189 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.111711 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.132493 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.152412 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.176615 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 
09:56:18.193033 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.210437 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.230430 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.251009 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.310486 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqcn4\" (UniqueName: \"kubernetes.io/projected/f9b1464c-2dc7-4c8c-82e6-0c72e106edf5-kube-api-access-hqcn4\") pod \"cluster-image-registry-operator-dc59b4c8b-wnp8m\" (UID: \"f9b1464c-2dc7-4c8c-82e6-0c72e106edf5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.325587 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/69c29801-25a3-49f6-a915-a92376892daf-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-zkxvv\" (UID: \"69c29801-25a3-49f6-a915-a92376892daf\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.330902 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.351307 4746 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.369967 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.390567 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.390766 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.425716 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h85k\" (UniqueName: \"kubernetes.io/projected/b1196114-7a7a-4f77-951a-20d10c32d0b2-kube-api-access-7h85k\") pod \"machine-api-operator-5694c8668f-lwl94\" (UID: \"b1196114-7a7a-4f77-951a-20d10c32d0b2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.471576 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f9b1464c-2dc7-4c8c-82e6-0c72e106edf5-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wnp8m\" (UID: \"f9b1464c-2dc7-4c8c-82e6-0c72e106edf5\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.483327 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjzr4\" (UniqueName: \"kubernetes.io/projected/fce8473a-0f95-4788-af68-35b608885c41-kube-api-access-sjzr4\") pod \"openshift-config-operator-7777fb866f-8ctft\" (UID: \"fce8473a-0f95-4788-af68-35b608885c41\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.483993 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29gh4\" (UniqueName: \"kubernetes.io/projected/a124ef34-55e9-4974-95ba-4b55ac4ad5ce-kube-api-access-29gh4\") pod \"cluster-samples-operator-665b6dd947-6b7h4\" (UID: \"a124ef34-55e9-4974-95ba-4b55ac4ad5ce\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6b7h4" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.497266 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.511009 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.511317 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj955\" (UniqueName: \"kubernetes.io/projected/a701598c-dc6e-418a-ad5e-3a93b8bc7f02-kube-api-access-bj955\") pod \"openshift-apiserver-operator-796bbdcf4f-g2qhz\" (UID: \"a701598c-dc6e-418a-ad5e-3a93b8bc7f02\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.532014 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.552269 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.576504 4746 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.590879 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.611899 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.619536 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv"] Dec 11 09:56:18 crc kubenswrapper[4746]: W1211 09:56:18.627338 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69c29801_25a3_49f6_a915_a92376892daf.slice/crio-db02f8f39446b5174478605d2b05dfaf3bd0215d60487b79b1bbeb6226714a1e WatchSource:0}: Error finding container db02f8f39446b5174478605d2b05dfaf3bd0215d60487b79b1bbeb6226714a1e: Status 404 returned error can't find the container with id db02f8f39446b5174478605d2b05dfaf3bd0215d60487b79b1bbeb6226714a1e Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.630870 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.649589 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.656331 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.656369 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6b7h4" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.666585 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.674156 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.690815 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.700136 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-8ctft"] Dec 11 09:56:18 crc kubenswrapper[4746]: W1211 09:56:18.706782 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfce8473a_0f95_4788_af68_35b608885c41.slice/crio-c287176a694411b7a546e97fd520ca179e3a0ebde70b0959ed2304ee7586dbb9 WatchSource:0}: Error finding container c287176a694411b7a546e97fd520ca179e3a0ebde70b0959ed2304ee7586dbb9: Status 404 returned error can't find the container with id c287176a694411b7a546e97fd520ca179e3a0ebde70b0959ed2304ee7586dbb9 Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.711149 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.730521 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.759388 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.763842 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.776725 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.795879 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.816488 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.830316 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.851262 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.871617 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.897889 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.910830 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.929203 4746 request.go:700] Waited for 1.000567219s due to client-side throttling, not priority and fairness, request: 
GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.931684 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-lwl94"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.932388 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 09:56:18 crc kubenswrapper[4746]: W1211 09:56:18.942144 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1196114_7a7a_4f77_951a_20d10c32d0b2.slice/crio-987087a7b13f50fa91f33d1726703adeb03f780ffb7ed7efecce15570d794754 WatchSource:0}: Error finding container 987087a7b13f50fa91f33d1726703adeb03f780ffb7ed7efecce15570d794754: Status 404 returned error can't find the container with id 987087a7b13f50fa91f33d1726703adeb03f780ffb7ed7efecce15570d794754 Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.951256 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.957220 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6b7h4"] Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.973199 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.992444 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 11 09:56:18 crc kubenswrapper[4746]: I1211 09:56:18.994812 4746 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz"] Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.010491 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.020065 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m"] Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.030812 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.051641 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 11 09:56:19 crc kubenswrapper[4746]: W1211 09:56:19.066918 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9b1464c_2dc7_4c8c_82e6_0c72e106edf5.slice/crio-6a3f187966c59d0b69a43c930d4946f356673d25d955a312b0ac118f182cdb24 WatchSource:0}: Error finding container 6a3f187966c59d0b69a43c930d4946f356673d25d955a312b0ac118f182cdb24: Status 404 returned error can't find the container with id 6a3f187966c59d0b69a43c930d4946f356673d25d955a312b0ac118f182cdb24 Dec 11 09:56:19 crc kubenswrapper[4746]: W1211 09:56:19.068199 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda701598c_dc6e_418a_ad5e_3a93b8bc7f02.slice/crio-c3e3cf6675dc045dc11ef8dbd191104077384b5584e18cd9ac4c1b04df12ca37 WatchSource:0}: Error finding container c3e3cf6675dc045dc11ef8dbd191104077384b5584e18cd9ac4c1b04df12ca37: Status 404 returned error can't find the container with id c3e3cf6675dc045dc11ef8dbd191104077384b5584e18cd9ac4c1b04df12ca37 Dec 11 
09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.090786 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.110820 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.129178 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eeec20bb-7eb5-48b5-81c8-ba8ded6347e4-srv-cert\") pod \"catalog-operator-68c6474976-zlj7m\" (UID: \"eeec20bb-7eb5-48b5-81c8-ba8ded6347e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.129242 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eeec20bb-7eb5-48b5-81c8-ba8ded6347e4-profile-collector-cert\") pod \"catalog-operator-68c6474976-zlj7m\" (UID: \"eeec20bb-7eb5-48b5-81c8-ba8ded6347e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.129269 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9d880ef-9ac5-4686-bf49-77406ca35135-trusted-ca\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.129292 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9d880ef-9ac5-4686-bf49-77406ca35135-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: 
\"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.129338 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z59cm\" (UniqueName: \"kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-kube-api-access-z59cm\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.129358 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8m7\" (UniqueName: \"kubernetes.io/projected/eeec20bb-7eb5-48b5-81c8-ba8ded6347e4-kube-api-access-tb8m7\") pod \"catalog-operator-68c6474976-zlj7m\" (UID: \"eeec20bb-7eb5-48b5-81c8-ba8ded6347e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.129392 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9d880ef-9ac5-4686-bf49-77406ca35135-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.129415 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-registry-tls\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.129454 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5eef2155-a824-4d97-a67e-d2c19aaecbd6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tx2q9\" (UID: \"5eef2155-a824-4d97-a67e-d2c19aaecbd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.129476 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9d880ef-9ac5-4686-bf49-77406ca35135-registry-certificates\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.129496 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eef2155-a824-4d97-a67e-d2c19aaecbd6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tx2q9\" (UID: \"5eef2155-a824-4d97-a67e-d2c19aaecbd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.129527 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eef2155-a824-4d97-a67e-d2c19aaecbd6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tx2q9\" (UID: \"5eef2155-a824-4d97-a67e-d2c19aaecbd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.129570 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.129592 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-bound-sa-token\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.131558 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 11 09:56:19 crc kubenswrapper[4746]: E1211 09:56:19.131679 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:19.631664088 +0000 UTC m=+152.491527391 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.150732 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.171530 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.191024 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.215717 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230490 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230638 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lw2q\" (UniqueName: \"kubernetes.io/projected/247537bc-cda2-416f-8040-ecd313916cf2-kube-api-access-8lw2q\") pod \"packageserver-d55dfcdfc-4hf8x\" (UID: \"247537bc-cda2-416f-8040-ecd313916cf2\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230667 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7q58\" (UniqueName: \"kubernetes.io/projected/ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e-kube-api-access-p7q58\") pod \"machine-config-server-s9pkn\" (UID: \"ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e\") " pod="openshift-machine-config-operator/machine-config-server-s9pkn" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230700 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9d880ef-9ac5-4686-bf49-77406ca35135-trusted-ca\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230735 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9dd7dcf7-5174-44fb-b164-38de3c8788ad-plugins-dir\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230750 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/247537bc-cda2-416f-8040-ecd313916cf2-apiservice-cert\") pod \"packageserver-d55dfcdfc-4hf8x\" (UID: \"247537bc-cda2-416f-8040-ecd313916cf2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230767 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/d9d880ef-9ac5-4686-bf49-77406ca35135-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230786 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6890a664-38ec-4702-b9db-7bbc19fe5aae-default-certificate\") pod \"router-default-5444994796-5bgx9\" (UID: \"6890a664-38ec-4702-b9db-7bbc19fe5aae\") " pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230811 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n4pc\" (UniqueName: \"kubernetes.io/projected/2e68686f-8a09-4477-8d31-3e1e762d06d3-kube-api-access-4n4pc\") pod \"service-ca-operator-777779d784-8c2w5\" (UID: \"2e68686f-8a09-4477-8d31-3e1e762d06d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230834 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce723dd2-6ea2-49d1-9faf-c92026630754-serving-cert\") pod \"controller-manager-879f6c89f-26ppb\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230853 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmnf9\" (UniqueName: \"kubernetes.io/projected/75c281af-5b44-463c-9f6f-cb666090c7c6-kube-api-access-zmnf9\") pod \"ingress-canary-nh8tz\" (UID: \"75c281af-5b44-463c-9f6f-cb666090c7c6\") " pod="openshift-ingress-canary/ingress-canary-nh8tz" Dec 11 09:56:19 crc 
kubenswrapper[4746]: I1211 09:56:19.230867 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9dd7dcf7-5174-44fb-b164-38de3c8788ad-registration-dir\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230882 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdbvg\" (UniqueName: \"kubernetes.io/projected/6890a664-38ec-4702-b9db-7bbc19fe5aae-kube-api-access-pdbvg\") pod \"router-default-5444994796-5bgx9\" (UID: \"6890a664-38ec-4702-b9db-7bbc19fe5aae\") " pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230926 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/196232fe-f052-4cfb-8ecc-102b9411a95f-metrics-tls\") pod \"dns-default-952k5\" (UID: \"196232fe-f052-4cfb-8ecc-102b9411a95f\") " pod="openshift-dns/dns-default-952k5" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230943 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b01d4aa-94e7-48da-86e7-6e5685e259b5-serving-cert\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230958 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6890a664-38ec-4702-b9db-7bbc19fe5aae-service-ca-bundle\") pod \"router-default-5444994796-5bgx9\" (UID: \"6890a664-38ec-4702-b9db-7bbc19fe5aae\") " 
pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230973 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4b01d4aa-94e7-48da-86e7-6e5685e259b5-encryption-config\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.230992 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6890a664-38ec-4702-b9db-7bbc19fe5aae-stats-auth\") pod \"router-default-5444994796-5bgx9\" (UID: \"6890a664-38ec-4702-b9db-7bbc19fe5aae\") " pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231006 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a00f4c19-cef1-4b49-9e22-090e3cf5f2bd-metrics-tls\") pod \"dns-operator-744455d44c-7hm64\" (UID: \"a00f4c19-cef1-4b49-9e22-090e3cf5f2bd\") " pod="openshift-dns-operator/dns-operator-744455d44c-7hm64" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231042 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-registry-tls\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231129 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e95e0ed3-6195-4925-abbb-116520ae5098-auth-proxy-config\") pod 
\"machine-config-operator-74547568cd-m8ht8\" (UID: \"e95e0ed3-6195-4925-abbb-116520ae5098\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231152 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/068ad235-6423-470d-9085-8e4536f18109-signing-cabundle\") pod \"service-ca-9c57cc56f-vjs55\" (UID: \"068ad235-6423-470d-9085-8e4536f18109\") " pod="openshift-service-ca/service-ca-9c57cc56f-vjs55" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231185 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eef2155-a824-4d97-a67e-d2c19aaecbd6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tx2q9\" (UID: \"5eef2155-a824-4d97-a67e-d2c19aaecbd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231205 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5503b821-54ed-4fd8-a336-ea952f033321-config\") pod \"kube-controller-manager-operator-78b949d7b-s2jbs\" (UID: \"5503b821-54ed-4fd8-a336-ea952f033321\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231231 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/75c281af-5b44-463c-9f6f-cb666090c7c6-cert\") pod \"ingress-canary-nh8tz\" (UID: \"75c281af-5b44-463c-9f6f-cb666090c7c6\") " pod="openshift-ingress-canary/ingress-canary-nh8tz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231250 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e68686f-8a09-4477-8d31-3e1e762d06d3-config\") pod \"service-ca-operator-777779d784-8c2w5\" (UID: \"2e68686f-8a09-4477-8d31-3e1e762d06d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231270 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c498d893-da54-4a88-9314-93fdfaaf130d-serving-cert\") pod \"etcd-operator-b45778765-66mrd\" (UID: \"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231303 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62dj6\" (UniqueName: \"kubernetes.io/projected/aee37070-f695-4b53-aaac-c41b538f28f6-kube-api-access-62dj6\") pod \"openshift-controller-manager-operator-756b6f6bc6-tsvvn\" (UID: \"aee37070-f695-4b53-aaac-c41b538f28f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231360 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t45tb\" (UniqueName: \"kubernetes.io/projected/9a59886a-3dad-4b64-b432-db95667e0bdc-kube-api-access-t45tb\") pod \"kube-storage-version-migrator-operator-b67b599dd-ntcrz\" (UID: \"9a59886a-3dad-4b64-b432-db95667e0bdc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231381 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/c498d893-da54-4a88-9314-93fdfaaf130d-etcd-client\") pod \"etcd-operator-b45778765-66mrd\" (UID: \"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231401 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-config-volume\") pod \"collect-profiles-29424105-lrv4h\" (UID: \"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231407 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231422 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5503b821-54ed-4fd8-a336-ea952f033321-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-s2jbs\" (UID: \"5503b821-54ed-4fd8-a336-ea952f033321\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231459 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/068ad235-6423-470d-9085-8e4536f18109-signing-key\") pod \"service-ca-9c57cc56f-vjs55\" (UID: \"068ad235-6423-470d-9085-8e4536f18109\") " pod="openshift-service-ca/service-ca-9c57cc56f-vjs55" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231477 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9sgh\" (UniqueName: 
\"kubernetes.io/projected/4b01d4aa-94e7-48da-86e7-6e5685e259b5-kube-api-access-c9sgh\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231517 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-bound-sa-token\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231533 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fczdw\" (UniqueName: \"kubernetes.io/projected/ce723dd2-6ea2-49d1-9faf-c92026630754-kube-api-access-fczdw\") pod \"controller-manager-879f6c89f-26ppb\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231549 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9dd7dcf7-5174-44fb-b164-38de3c8788ad-mountpoint-dir\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231566 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4b01d4aa-94e7-48da-86e7-6e5685e259b5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.231615 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4b01d4aa-94e7-48da-86e7-6e5685e259b5-audit-policies\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.232507 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtf6w\" (UniqueName: \"kubernetes.io/projected/9dd7dcf7-5174-44fb-b164-38de3c8788ad-kube-api-access-xtf6w\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.232539 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6fwv\" (UniqueName: \"kubernetes.io/projected/0178df62-7497-43fc-b639-3f44170cff1c-kube-api-access-x6fwv\") pod \"package-server-manager-789f6589d5-dhjm8\" (UID: \"0178df62-7497-43fc-b639-3f44170cff1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.232557 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-secret-volume\") pod \"collect-profiles-29424105-lrv4h\" (UID: \"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.232578 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpcdw\" (UniqueName: \"kubernetes.io/projected/a00f4c19-cef1-4b49-9e22-090e3cf5f2bd-kube-api-access-jpcdw\") pod 
\"dns-operator-744455d44c-7hm64\" (UID: \"a00f4c19-cef1-4b49-9e22-090e3cf5f2bd\") " pod="openshift-dns-operator/dns-operator-744455d44c-7hm64" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.232594 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfzv7\" (UniqueName: \"kubernetes.io/projected/196232fe-f052-4cfb-8ecc-102b9411a95f-kube-api-access-mfzv7\") pod \"dns-default-952k5\" (UID: \"196232fe-f052-4cfb-8ecc-102b9411a95f\") " pod="openshift-dns/dns-default-952k5" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.232639 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/eeec20bb-7eb5-48b5-81c8-ba8ded6347e4-srv-cert\") pod \"catalog-operator-68c6474976-zlj7m\" (UID: \"eeec20bb-7eb5-48b5-81c8-ba8ded6347e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.232682 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eeec20bb-7eb5-48b5-81c8-ba8ded6347e4-profile-collector-cert\") pod \"catalog-operator-68c6474976-zlj7m\" (UID: \"eeec20bb-7eb5-48b5-81c8-ba8ded6347e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.232744 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-krdwz\" (UID: \"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.232785 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a59886a-3dad-4b64-b432-db95667e0bdc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ntcrz\" (UID: \"9a59886a-3dad-4b64-b432-db95667e0bdc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.232808 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5503b821-54ed-4fd8-a336-ea952f033321-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s2jbs\" (UID: \"5503b821-54ed-4fd8-a336-ea952f033321\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs" Dec 11 09:56:19 crc kubenswrapper[4746]: E1211 09:56:19.232866 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:19.732843343 +0000 UTC m=+152.592706716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.233281 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9d880ef-9ac5-4686-bf49-77406ca35135-ca-trust-extracted\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.233599 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0178df62-7497-43fc-b639-3f44170cff1c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dhjm8\" (UID: \"0178df62-7497-43fc-b639-3f44170cff1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.233626 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aee37070-f695-4b53-aaac-c41b538f28f6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tsvvn\" (UID: \"aee37070-f695-4b53-aaac-c41b538f28f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.233676 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-client-ca\") pod \"controller-manager-879f6c89f-26ppb\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.233713 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz6gl\" (UniqueName: \"kubernetes.io/projected/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-kube-api-access-zz6gl\") pod \"collect-profiles-29424105-lrv4h\" (UID: \"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.233732 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9dd7dcf7-5174-44fb-b164-38de3c8788ad-csi-data-dir\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.233753 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6890a664-38ec-4702-b9db-7bbc19fe5aae-metrics-certs\") pod \"router-default-5444994796-5bgx9\" (UID: \"6890a664-38ec-4702-b9db-7bbc19fe5aae\") " pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.233776 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4phs\" (UniqueName: \"kubernetes.io/projected/068ad235-6423-470d-9085-8e4536f18109-kube-api-access-x4phs\") pod \"service-ca-9c57cc56f-vjs55\" (UID: \"068ad235-6423-470d-9085-8e4536f18109\") " pod="openshift-service-ca/service-ca-9c57cc56f-vjs55" Dec 11 09:56:19 crc 
kubenswrapper[4746]: I1211 09:56:19.233809 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z59cm\" (UniqueName: \"kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-kube-api-access-z59cm\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.233983 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5eef2155-a824-4d97-a67e-d2c19aaecbd6-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tx2q9\" (UID: \"5eef2155-a824-4d97-a67e-d2c19aaecbd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.234078 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9d880ef-9ac5-4686-bf49-77406ca35135-trusted-ca\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.234134 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb8m7\" (UniqueName: \"kubernetes.io/projected/eeec20bb-7eb5-48b5-81c8-ba8ded6347e4-kube-api-access-tb8m7\") pod \"catalog-operator-68c6474976-zlj7m\" (UID: \"eeec20bb-7eb5-48b5-81c8-ba8ded6347e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.234201 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/247537bc-cda2-416f-8040-ecd313916cf2-webhook-cert\") pod \"packageserver-d55dfcdfc-4hf8x\" (UID: 
\"247537bc-cda2-416f-8040-ecd313916cf2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.234231 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e-certs\") pod \"machine-config-server-s9pkn\" (UID: \"ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e\") " pod="openshift-machine-config-operator/machine-config-server-s9pkn" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.234343 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftpgw\" (UniqueName: \"kubernetes.io/projected/847c9b9b-d231-4694-8229-0730ba158052-kube-api-access-ftpgw\") pod \"ingress-operator-5b745b69d9-q9jb9\" (UID: \"847c9b9b-d231-4694-8229-0730ba158052\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.234407 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/847c9b9b-d231-4694-8229-0730ba158052-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q9jb9\" (UID: \"847c9b9b-d231-4694-8229-0730ba158052\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.234461 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b01d4aa-94e7-48da-86e7-6e5685e259b5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.234522 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"images\" (UniqueName: \"kubernetes.io/configmap/e95e0ed3-6195-4925-abbb-116520ae5098-images\") pod \"machine-config-operator-74547568cd-m8ht8\" (UID: \"e95e0ed3-6195-4925-abbb-116520ae5098\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.234567 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8hdj\" (UniqueName: \"kubernetes.io/projected/e95e0ed3-6195-4925-abbb-116520ae5098-kube-api-access-n8hdj\") pod \"machine-config-operator-74547568cd-m8ht8\" (UID: \"e95e0ed3-6195-4925-abbb-116520ae5098\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.234720 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9d880ef-9ac5-4686-bf49-77406ca35135-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.234841 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a59886a-3dad-4b64-b432-db95667e0bdc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ntcrz\" (UID: \"9a59886a-3dad-4b64-b432-db95667e0bdc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.234870 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c498d893-da54-4a88-9314-93fdfaaf130d-etcd-service-ca\") pod \"etcd-operator-b45778765-66mrd\" (UID: 
\"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.234900 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5eef2155-a824-4d97-a67e-d2c19aaecbd6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tx2q9\" (UID: \"5eef2155-a824-4d97-a67e-d2c19aaecbd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.235010 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e-node-bootstrap-token\") pod \"machine-config-server-s9pkn\" (UID: \"ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e\") " pod="openshift-machine-config-operator/machine-config-server-s9pkn" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.235109 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9d880ef-9ac5-4686-bf49-77406ca35135-registry-certificates\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.235251 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/196232fe-f052-4cfb-8ecc-102b9411a95f-config-volume\") pod \"dns-default-952k5\" (UID: \"196232fe-f052-4cfb-8ecc-102b9411a95f\") " pod="openshift-dns/dns-default-952k5" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.235280 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9b68\" 
(UniqueName: \"kubernetes.io/projected/26ce87dd-769c-424f-8865-6e2c8d81ee39-kube-api-access-d9b68\") pod \"migrator-59844c95c7-sf4lq\" (UID: \"26ce87dd-769c-424f-8865-6e2c8d81ee39\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sf4lq" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.235347 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee37070-f695-4b53-aaac-c41b538f28f6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tsvvn\" (UID: \"aee37070-f695-4b53-aaac-c41b538f28f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.235373 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b01d4aa-94e7-48da-86e7-6e5685e259b5-audit-dir\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.235504 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c498d893-da54-4a88-9314-93fdfaaf130d-etcd-ca\") pod \"etcd-operator-b45778765-66mrd\" (UID: \"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.235532 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgfcd\" (UniqueName: \"kubernetes.io/projected/c498d893-da54-4a88-9314-93fdfaaf130d-kube-api-access-wgfcd\") pod \"etcd-operator-b45778765-66mrd\" (UID: \"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 
09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.235558 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eef2155-a824-4d97-a67e-d2c19aaecbd6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tx2q9\" (UID: \"5eef2155-a824-4d97-a67e-d2c19aaecbd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.235935 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e95e0ed3-6195-4925-abbb-116520ae5098-proxy-tls\") pod \"machine-config-operator-74547568cd-m8ht8\" (UID: \"e95e0ed3-6195-4925-abbb-116520ae5098\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.236216 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmxmz\" (UniqueName: \"kubernetes.io/projected/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-kube-api-access-gmxmz\") pod \"marketplace-operator-79b997595-krdwz\" (UID: \"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.236241 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9d880ef-9ac5-4686-bf49-77406ca35135-registry-certificates\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.236248 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-config\") pod \"controller-manager-879f6c89f-26ppb\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.236308 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c498d893-da54-4a88-9314-93fdfaaf130d-config\") pod \"etcd-operator-b45778765-66mrd\" (UID: \"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.236347 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/847c9b9b-d231-4694-8229-0730ba158052-trusted-ca\") pod \"ingress-operator-5b745b69d9-q9jb9\" (UID: \"847c9b9b-d231-4694-8229-0730ba158052\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.236371 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/247537bc-cda2-416f-8040-ecd313916cf2-tmpfs\") pod \"packageserver-d55dfcdfc-4hf8x\" (UID: \"247537bc-cda2-416f-8040-ecd313916cf2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.236395 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b01d4aa-94e7-48da-86e7-6e5685e259b5-etcd-client\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.236461 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/847c9b9b-d231-4694-8229-0730ba158052-metrics-tls\") pod \"ingress-operator-5b745b69d9-q9jb9\" (UID: \"847c9b9b-d231-4694-8229-0730ba158052\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.236490 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e68686f-8a09-4477-8d31-3e1e762d06d3-serving-cert\") pod \"service-ca-operator-777779d784-8c2w5\" (UID: \"2e68686f-8a09-4477-8d31-3e1e762d06d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.236522 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-26ppb\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.236678 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9dd7dcf7-5174-44fb-b164-38de3c8788ad-socket-dir\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.236836 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-krdwz\" (UID: 
\"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.238428 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9d880ef-9ac5-4686-bf49-77406ca35135-installation-pull-secrets\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.238669 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5eef2155-a824-4d97-a67e-d2c19aaecbd6-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tx2q9\" (UID: \"5eef2155-a824-4d97-a67e-d2c19aaecbd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.239444 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/eeec20bb-7eb5-48b5-81c8-ba8ded6347e4-profile-collector-cert\") pod \"catalog-operator-68c6474976-zlj7m\" (UID: \"eeec20bb-7eb5-48b5-81c8-ba8ded6347e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.239525 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-registry-tls\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.240039 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/eeec20bb-7eb5-48b5-81c8-ba8ded6347e4-srv-cert\") pod \"catalog-operator-68c6474976-zlj7m\" (UID: \"eeec20bb-7eb5-48b5-81c8-ba8ded6347e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.251862 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.271205 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.293007 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.313032 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.318081 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94" event={"ID":"b1196114-7a7a-4f77-951a-20d10c32d0b2","Type":"ContainerStarted","Data":"58bc642fbcc810626d9eab9401ec17e4a909b227b9bf12ef193f74541b6339c4"} Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.318128 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94" event={"ID":"b1196114-7a7a-4f77-951a-20d10c32d0b2","Type":"ContainerStarted","Data":"b1a7c594db922d5b6363217db5c3e74dd7b5b9c6b74e614b7b9803ba476bccf1"} Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.318141 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94" event={"ID":"b1196114-7a7a-4f77-951a-20d10c32d0b2","Type":"ContainerStarted","Data":"987087a7b13f50fa91f33d1726703adeb03f780ffb7ed7efecce15570d794754"} Dec 11 09:56:19 crc 
kubenswrapper[4746]: I1211 09:56:19.320762 4746 generic.go:334] "Generic (PLEG): container finished" podID="fce8473a-0f95-4788-af68-35b608885c41" containerID="f29b35aa19d35591980dec4547cdd94ac24663d59f8b5ba6b80a3d01bf346215" exitCode=0 Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.320814 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft" event={"ID":"fce8473a-0f95-4788-af68-35b608885c41","Type":"ContainerDied","Data":"f29b35aa19d35591980dec4547cdd94ac24663d59f8b5ba6b80a3d01bf346215"} Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.320832 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft" event={"ID":"fce8473a-0f95-4788-af68-35b608885c41","Type":"ContainerStarted","Data":"c287176a694411b7a546e97fd520ca179e3a0ebde70b0959ed2304ee7586dbb9"} Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.325164 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz" event={"ID":"a701598c-dc6e-418a-ad5e-3a93b8bc7f02","Type":"ContainerStarted","Data":"c56db18e9577bfa8cc994de7f649ceb3826580d47362a4f7861d90b97b7be5f5"} Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.325191 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz" event={"ID":"a701598c-dc6e-418a-ad5e-3a93b8bc7f02","Type":"ContainerStarted","Data":"c3e3cf6675dc045dc11ef8dbd191104077384b5584e18cd9ac4c1b04df12ca37"} Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.327884 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6b7h4" event={"ID":"a124ef34-55e9-4974-95ba-4b55ac4ad5ce","Type":"ContainerStarted","Data":"1f90f5c593357bb6c5bce2797aaa8af0d607ab8e2b57e77d047dcc9047de1437"} Dec 11 09:56:19 crc 
kubenswrapper[4746]: I1211 09:56:19.327956 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6b7h4" event={"ID":"a124ef34-55e9-4974-95ba-4b55ac4ad5ce","Type":"ContainerStarted","Data":"6f9f74add8ab239cc01d32d59aca09d4348d439c6457de0d4cce22350a7980ce"} Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.329358 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m" event={"ID":"f9b1464c-2dc7-4c8c-82e6-0c72e106edf5","Type":"ContainerStarted","Data":"c7b936ae7470d4bd20021b5a20b47425f8af4ffe738f8b37dfa4a9ab84192218"} Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.329393 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m" event={"ID":"f9b1464c-2dc7-4c8c-82e6-0c72e106edf5","Type":"ContainerStarted","Data":"6a3f187966c59d0b69a43c930d4946f356673d25d955a312b0ac118f182cdb24"} Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.330391 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.330647 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv" event={"ID":"69c29801-25a3-49f6-a915-a92376892daf","Type":"ContainerStarted","Data":"e274bb6ff86bd5495fd1feaa5e6e894a03a93bb0cd08cd399aceee76fff66cca"} Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.330700 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv" event={"ID":"69c29801-25a3-49f6-a915-a92376892daf","Type":"ContainerStarted","Data":"db02f8f39446b5174478605d2b05dfaf3bd0215d60487b79b1bbeb6226714a1e"} Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338356 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9dd7dcf7-5174-44fb-b164-38de3c8788ad-plugins-dir\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338387 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/247537bc-cda2-416f-8040-ecd313916cf2-apiservice-cert\") pod \"packageserver-d55dfcdfc-4hf8x\" (UID: \"247537bc-cda2-416f-8040-ecd313916cf2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338406 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6890a664-38ec-4702-b9db-7bbc19fe5aae-default-certificate\") pod \"router-default-5444994796-5bgx9\" (UID: \"6890a664-38ec-4702-b9db-7bbc19fe5aae\") " pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338425 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n4pc\" (UniqueName: \"kubernetes.io/projected/2e68686f-8a09-4477-8d31-3e1e762d06d3-kube-api-access-4n4pc\") pod \"service-ca-operator-777779d784-8c2w5\" (UID: \"2e68686f-8a09-4477-8d31-3e1e762d06d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338442 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce723dd2-6ea2-49d1-9faf-c92026630754-serving-cert\") pod \"controller-manager-879f6c89f-26ppb\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" 
Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338462 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmnf9\" (UniqueName: \"kubernetes.io/projected/75c281af-5b44-463c-9f6f-cb666090c7c6-kube-api-access-zmnf9\") pod \"ingress-canary-nh8tz\" (UID: \"75c281af-5b44-463c-9f6f-cb666090c7c6\") " pod="openshift-ingress-canary/ingress-canary-nh8tz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338477 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9dd7dcf7-5174-44fb-b164-38de3c8788ad-registration-dir\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338491 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdbvg\" (UniqueName: \"kubernetes.io/projected/6890a664-38ec-4702-b9db-7bbc19fe5aae-kube-api-access-pdbvg\") pod \"router-default-5444994796-5bgx9\" (UID: \"6890a664-38ec-4702-b9db-7bbc19fe5aae\") " pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338509 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/196232fe-f052-4cfb-8ecc-102b9411a95f-metrics-tls\") pod \"dns-default-952k5\" (UID: \"196232fe-f052-4cfb-8ecc-102b9411a95f\") " pod="openshift-dns/dns-default-952k5" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338524 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b01d4aa-94e7-48da-86e7-6e5685e259b5-serving-cert\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc 
kubenswrapper[4746]: I1211 09:56:19.338539 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6890a664-38ec-4702-b9db-7bbc19fe5aae-service-ca-bundle\") pod \"router-default-5444994796-5bgx9\" (UID: \"6890a664-38ec-4702-b9db-7bbc19fe5aae\") " pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338554 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4b01d4aa-94e7-48da-86e7-6e5685e259b5-encryption-config\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338575 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6890a664-38ec-4702-b9db-7bbc19fe5aae-stats-auth\") pod \"router-default-5444994796-5bgx9\" (UID: \"6890a664-38ec-4702-b9db-7bbc19fe5aae\") " pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338593 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a00f4c19-cef1-4b49-9e22-090e3cf5f2bd-metrics-tls\") pod \"dns-operator-744455d44c-7hm64\" (UID: \"a00f4c19-cef1-4b49-9e22-090e3cf5f2bd\") " pod="openshift-dns-operator/dns-operator-744455d44c-7hm64" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338638 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e95e0ed3-6195-4925-abbb-116520ae5098-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m8ht8\" (UID: \"e95e0ed3-6195-4925-abbb-116520ae5098\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338657 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/068ad235-6423-470d-9085-8e4536f18109-signing-cabundle\") pod \"service-ca-9c57cc56f-vjs55\" (UID: \"068ad235-6423-470d-9085-8e4536f18109\") " pod="openshift-service-ca/service-ca-9c57cc56f-vjs55" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338675 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5503b821-54ed-4fd8-a336-ea952f033321-config\") pod \"kube-controller-manager-operator-78b949d7b-s2jbs\" (UID: \"5503b821-54ed-4fd8-a336-ea952f033321\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338695 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/75c281af-5b44-463c-9f6f-cb666090c7c6-cert\") pod \"ingress-canary-nh8tz\" (UID: \"75c281af-5b44-463c-9f6f-cb666090c7c6\") " pod="openshift-ingress-canary/ingress-canary-nh8tz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338710 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e68686f-8a09-4477-8d31-3e1e762d06d3-config\") pod \"service-ca-operator-777779d784-8c2w5\" (UID: \"2e68686f-8a09-4477-8d31-3e1e762d06d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338726 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c498d893-da54-4a88-9314-93fdfaaf130d-serving-cert\") pod \"etcd-operator-b45778765-66mrd\" (UID: 
\"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338741 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62dj6\" (UniqueName: \"kubernetes.io/projected/aee37070-f695-4b53-aaac-c41b538f28f6-kube-api-access-62dj6\") pod \"openshift-controller-manager-operator-756b6f6bc6-tsvvn\" (UID: \"aee37070-f695-4b53-aaac-c41b538f28f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338758 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t45tb\" (UniqueName: \"kubernetes.io/projected/9a59886a-3dad-4b64-b432-db95667e0bdc-kube-api-access-t45tb\") pod \"kube-storage-version-migrator-operator-b67b599dd-ntcrz\" (UID: \"9a59886a-3dad-4b64-b432-db95667e0bdc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338739 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9dd7dcf7-5174-44fb-b164-38de3c8788ad-registration-dir\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338772 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c498d893-da54-4a88-9314-93fdfaaf130d-etcd-client\") pod \"etcd-operator-b45778765-66mrd\" (UID: \"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338851 4746 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-config-volume\") pod \"collect-profiles-29424105-lrv4h\" (UID: \"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338879 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5503b821-54ed-4fd8-a336-ea952f033321-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-s2jbs\" (UID: \"5503b821-54ed-4fd8-a336-ea952f033321\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338923 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/068ad235-6423-470d-9085-8e4536f18109-signing-key\") pod \"service-ca-9c57cc56f-vjs55\" (UID: \"068ad235-6423-470d-9085-8e4536f18109\") " pod="openshift-service-ca/service-ca-9c57cc56f-vjs55" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338961 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9sgh\" (UniqueName: \"kubernetes.io/projected/4b01d4aa-94e7-48da-86e7-6e5685e259b5-kube-api-access-c9sgh\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.338995 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339031 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fczdw\" (UniqueName: \"kubernetes.io/projected/ce723dd2-6ea2-49d1-9faf-c92026630754-kube-api-access-fczdw\") pod \"controller-manager-879f6c89f-26ppb\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339087 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9dd7dcf7-5174-44fb-b164-38de3c8788ad-mountpoint-dir\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339110 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4b01d4aa-94e7-48da-86e7-6e5685e259b5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339135 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4b01d4aa-94e7-48da-86e7-6e5685e259b5-audit-policies\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339176 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtf6w\" (UniqueName: \"kubernetes.io/projected/9dd7dcf7-5174-44fb-b164-38de3c8788ad-kube-api-access-xtf6w\") pod \"csi-hostpathplugin-m8w9j\" 
(UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339200 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6fwv\" (UniqueName: \"kubernetes.io/projected/0178df62-7497-43fc-b639-3f44170cff1c-kube-api-access-x6fwv\") pod \"package-server-manager-789f6589d5-dhjm8\" (UID: \"0178df62-7497-43fc-b639-3f44170cff1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339223 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-secret-volume\") pod \"collect-profiles-29424105-lrv4h\" (UID: \"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339249 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpcdw\" (UniqueName: \"kubernetes.io/projected/a00f4c19-cef1-4b49-9e22-090e3cf5f2bd-kube-api-access-jpcdw\") pod \"dns-operator-744455d44c-7hm64\" (UID: \"a00f4c19-cef1-4b49-9e22-090e3cf5f2bd\") " pod="openshift-dns-operator/dns-operator-744455d44c-7hm64" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339275 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfzv7\" (UniqueName: \"kubernetes.io/projected/196232fe-f052-4cfb-8ecc-102b9411a95f-kube-api-access-mfzv7\") pod \"dns-default-952k5\" (UID: \"196232fe-f052-4cfb-8ecc-102b9411a95f\") " pod="openshift-dns/dns-default-952k5" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339319 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-krdwz\" (UID: \"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339348 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a59886a-3dad-4b64-b432-db95667e0bdc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ntcrz\" (UID: \"9a59886a-3dad-4b64-b432-db95667e0bdc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339371 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5503b821-54ed-4fd8-a336-ea952f033321-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s2jbs\" (UID: \"5503b821-54ed-4fd8-a336-ea952f033321\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339396 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0178df62-7497-43fc-b639-3f44170cff1c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dhjm8\" (UID: \"0178df62-7497-43fc-b639-3f44170cff1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339423 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aee37070-f695-4b53-aaac-c41b538f28f6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tsvvn\" (UID: 
\"aee37070-f695-4b53-aaac-c41b538f28f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339452 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-client-ca\") pod \"controller-manager-879f6c89f-26ppb\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339485 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz6gl\" (UniqueName: \"kubernetes.io/projected/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-kube-api-access-zz6gl\") pod \"collect-profiles-29424105-lrv4h\" (UID: \"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339509 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9dd7dcf7-5174-44fb-b164-38de3c8788ad-csi-data-dir\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339531 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6890a664-38ec-4702-b9db-7bbc19fe5aae-metrics-certs\") pod \"router-default-5444994796-5bgx9\" (UID: \"6890a664-38ec-4702-b9db-7bbc19fe5aae\") " pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339598 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4phs\" (UniqueName: 
\"kubernetes.io/projected/068ad235-6423-470d-9085-8e4536f18109-kube-api-access-x4phs\") pod \"service-ca-9c57cc56f-vjs55\" (UID: \"068ad235-6423-470d-9085-8e4536f18109\") " pod="openshift-service-ca/service-ca-9c57cc56f-vjs55" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339689 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/247537bc-cda2-416f-8040-ecd313916cf2-webhook-cert\") pod \"packageserver-d55dfcdfc-4hf8x\" (UID: \"247537bc-cda2-416f-8040-ecd313916cf2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339717 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e-certs\") pod \"machine-config-server-s9pkn\" (UID: \"ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e\") " pod="openshift-machine-config-operator/machine-config-server-s9pkn" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339771 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftpgw\" (UniqueName: \"kubernetes.io/projected/847c9b9b-d231-4694-8229-0730ba158052-kube-api-access-ftpgw\") pod \"ingress-operator-5b745b69d9-q9jb9\" (UID: \"847c9b9b-d231-4694-8229-0730ba158052\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339796 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/847c9b9b-d231-4694-8229-0730ba158052-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q9jb9\" (UID: \"847c9b9b-d231-4694-8229-0730ba158052\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339842 4746 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b01d4aa-94e7-48da-86e7-6e5685e259b5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339871 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e95e0ed3-6195-4925-abbb-116520ae5098-images\") pod \"machine-config-operator-74547568cd-m8ht8\" (UID: \"e95e0ed3-6195-4925-abbb-116520ae5098\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339971 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8hdj\" (UniqueName: \"kubernetes.io/projected/e95e0ed3-6195-4925-abbb-116520ae5098-kube-api-access-n8hdj\") pod \"machine-config-operator-74547568cd-m8ht8\" (UID: \"e95e0ed3-6195-4925-abbb-116520ae5098\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340010 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6890a664-38ec-4702-b9db-7bbc19fe5aae-service-ca-bundle\") pod \"router-default-5444994796-5bgx9\" (UID: \"6890a664-38ec-4702-b9db-7bbc19fe5aae\") " pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340028 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a59886a-3dad-4b64-b432-db95667e0bdc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ntcrz\" (UID: \"9a59886a-3dad-4b64-b432-db95667e0bdc\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340086 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c498d893-da54-4a88-9314-93fdfaaf130d-etcd-service-ca\") pod \"etcd-operator-b45778765-66mrd\" (UID: \"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340117 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e-node-bootstrap-token\") pod \"machine-config-server-s9pkn\" (UID: \"ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e\") " pod="openshift-machine-config-operator/machine-config-server-s9pkn" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340177 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/196232fe-f052-4cfb-8ecc-102b9411a95f-config-volume\") pod \"dns-default-952k5\" (UID: \"196232fe-f052-4cfb-8ecc-102b9411a95f\") " pod="openshift-dns/dns-default-952k5" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340204 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9b68\" (UniqueName: \"kubernetes.io/projected/26ce87dd-769c-424f-8865-6e2c8d81ee39-kube-api-access-d9b68\") pod \"migrator-59844c95c7-sf4lq\" (UID: \"26ce87dd-769c-424f-8865-6e2c8d81ee39\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sf4lq" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340271 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee37070-f695-4b53-aaac-c41b538f28f6-config\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-tsvvn\" (UID: \"aee37070-f695-4b53-aaac-c41b538f28f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340318 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b01d4aa-94e7-48da-86e7-6e5685e259b5-audit-dir\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340344 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c498d893-da54-4a88-9314-93fdfaaf130d-etcd-ca\") pod \"etcd-operator-b45778765-66mrd\" (UID: \"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340366 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgfcd\" (UniqueName: \"kubernetes.io/projected/c498d893-da54-4a88-9314-93fdfaaf130d-kube-api-access-wgfcd\") pod \"etcd-operator-b45778765-66mrd\" (UID: \"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340428 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e95e0ed3-6195-4925-abbb-116520ae5098-proxy-tls\") pod \"machine-config-operator-74547568cd-m8ht8\" (UID: \"e95e0ed3-6195-4925-abbb-116520ae5098\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340492 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gmxmz\" (UniqueName: \"kubernetes.io/projected/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-kube-api-access-gmxmz\") pod \"marketplace-operator-79b997595-krdwz\" (UID: \"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340521 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-config\") pod \"controller-manager-879f6c89f-26ppb\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340580 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c498d893-da54-4a88-9314-93fdfaaf130d-config\") pod \"etcd-operator-b45778765-66mrd\" (UID: \"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340606 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/847c9b9b-d231-4694-8229-0730ba158052-trusted-ca\") pod \"ingress-operator-5b745b69d9-q9jb9\" (UID: \"847c9b9b-d231-4694-8229-0730ba158052\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340652 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/247537bc-cda2-416f-8040-ecd313916cf2-tmpfs\") pod \"packageserver-d55dfcdfc-4hf8x\" (UID: \"247537bc-cda2-416f-8040-ecd313916cf2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340676 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b01d4aa-94e7-48da-86e7-6e5685e259b5-etcd-client\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340701 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/847c9b9b-d231-4694-8229-0730ba158052-metrics-tls\") pod \"ingress-operator-5b745b69d9-q9jb9\" (UID: \"847c9b9b-d231-4694-8229-0730ba158052\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340748 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e68686f-8a09-4477-8d31-3e1e762d06d3-serving-cert\") pod \"service-ca-operator-777779d784-8c2w5\" (UID: \"2e68686f-8a09-4477-8d31-3e1e762d06d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340772 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-26ppb\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340848 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9dd7dcf7-5174-44fb-b164-38de3c8788ad-socket-dir\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340901 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-krdwz\" (UID: \"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340932 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lw2q\" (UniqueName: \"kubernetes.io/projected/247537bc-cda2-416f-8040-ecd313916cf2-kube-api-access-8lw2q\") pod \"packageserver-d55dfcdfc-4hf8x\" (UID: \"247537bc-cda2-416f-8040-ecd313916cf2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340981 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7q58\" (UniqueName: \"kubernetes.io/projected/ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e-kube-api-access-p7q58\") pod \"machine-config-server-s9pkn\" (UID: \"ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e\") " pod="openshift-machine-config-operator/machine-config-server-s9pkn" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.341256 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9a59886a-3dad-4b64-b432-db95667e0bdc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-ntcrz\" (UID: \"9a59886a-3dad-4b64-b432-db95667e0bdc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.341459 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4b01d4aa-94e7-48da-86e7-6e5685e259b5-audit-policies\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: 
\"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.341671 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/9dd7dcf7-5174-44fb-b164-38de3c8788ad-csi-data-dir\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.341688 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4b01d4aa-94e7-48da-86e7-6e5685e259b5-audit-dir\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: E1211 09:56:19.341958 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:19.84194366 +0000 UTC m=+152.701807053 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.342583 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e68686f-8a09-4477-8d31-3e1e762d06d3-config\") pod \"service-ca-operator-777779d784-8c2w5\" (UID: \"2e68686f-8a09-4477-8d31-3e1e762d06d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.343076 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/9dd7dcf7-5174-44fb-b164-38de3c8788ad-mountpoint-dir\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.343148 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b01d4aa-94e7-48da-86e7-6e5685e259b5-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.343617 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4b01d4aa-94e7-48da-86e7-6e5685e259b5-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.343856 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/9dd7dcf7-5174-44fb-b164-38de3c8788ad-plugins-dir\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.339685 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e95e0ed3-6195-4925-abbb-116520ae5098-auth-proxy-config\") pod \"machine-config-operator-74547568cd-m8ht8\" (UID: \"e95e0ed3-6195-4925-abbb-116520ae5098\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.344497 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/247537bc-cda2-416f-8040-ecd313916cf2-tmpfs\") pod \"packageserver-d55dfcdfc-4hf8x\" (UID: \"247537bc-cda2-416f-8040-ecd313916cf2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.344849 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c498d893-da54-4a88-9314-93fdfaaf130d-config\") pod \"etcd-operator-b45778765-66mrd\" (UID: \"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.344897 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce723dd2-6ea2-49d1-9faf-c92026630754-serving-cert\") pod \"controller-manager-879f6c89f-26ppb\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.344962 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/247537bc-cda2-416f-8040-ecd313916cf2-apiservice-cert\") pod \"packageserver-d55dfcdfc-4hf8x\" (UID: \"247537bc-cda2-416f-8040-ecd313916cf2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.345003 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-config\") pod \"controller-manager-879f6c89f-26ppb\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.345714 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-client-ca\") pod \"controller-manager-879f6c89f-26ppb\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.345848 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-krdwz\" (UID: \"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.345925 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4b01d4aa-94e7-48da-86e7-6e5685e259b5-serving-cert\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: 
\"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.345999 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/247537bc-cda2-416f-8040-ecd313916cf2-webhook-cert\") pod \"packageserver-d55dfcdfc-4hf8x\" (UID: \"247537bc-cda2-416f-8040-ecd313916cf2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.340370 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5503b821-54ed-4fd8-a336-ea952f033321-config\") pod \"kube-controller-manager-operator-78b949d7b-s2jbs\" (UID: \"5503b821-54ed-4fd8-a336-ea952f033321\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.346206 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a00f4c19-cef1-4b49-9e22-090e3cf5f2bd-metrics-tls\") pod \"dns-operator-744455d44c-7hm64\" (UID: \"a00f4c19-cef1-4b49-9e22-090e3cf5f2bd\") " pod="openshift-dns-operator/dns-operator-744455d44c-7hm64" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.346288 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9dd7dcf7-5174-44fb-b164-38de3c8788ad-socket-dir\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.346854 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e95e0ed3-6195-4925-abbb-116520ae5098-images\") pod \"machine-config-operator-74547568cd-m8ht8\" (UID: 
\"e95e0ed3-6195-4925-abbb-116520ae5098\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.347425 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/c498d893-da54-4a88-9314-93fdfaaf130d-etcd-service-ca\") pod \"etcd-operator-b45778765-66mrd\" (UID: \"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.347737 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-26ppb\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.348380 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4b01d4aa-94e7-48da-86e7-6e5685e259b5-etcd-client\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.348653 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0178df62-7497-43fc-b639-3f44170cff1c-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-dhjm8\" (UID: \"0178df62-7497-43fc-b639-3f44170cff1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.348823 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/6890a664-38ec-4702-b9db-7bbc19fe5aae-default-certificate\") pod \"router-default-5444994796-5bgx9\" (UID: \"6890a664-38ec-4702-b9db-7bbc19fe5aae\") " pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.348829 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c498d893-da54-4a88-9314-93fdfaaf130d-etcd-client\") pod \"etcd-operator-b45778765-66mrd\" (UID: \"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.349015 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/c498d893-da54-4a88-9314-93fdfaaf130d-etcd-ca\") pod \"etcd-operator-b45778765-66mrd\" (UID: \"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.349265 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4b01d4aa-94e7-48da-86e7-6e5685e259b5-encryption-config\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.350932 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6890a664-38ec-4702-b9db-7bbc19fe5aae-stats-auth\") pod \"router-default-5444994796-5bgx9\" (UID: \"6890a664-38ec-4702-b9db-7bbc19fe5aae\") " pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.351404 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5503b821-54ed-4fd8-a336-ea952f033321-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-s2jbs\" (UID: \"5503b821-54ed-4fd8-a336-ea952f033321\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.351616 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e68686f-8a09-4477-8d31-3e1e762d06d3-serving-cert\") pod \"service-ca-operator-777779d784-8c2w5\" (UID: \"2e68686f-8a09-4477-8d31-3e1e762d06d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.352117 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.352707 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9a59886a-3dad-4b64-b432-db95667e0bdc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-ntcrz\" (UID: \"9a59886a-3dad-4b64-b432-db95667e0bdc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.353794 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6890a664-38ec-4702-b9db-7bbc19fe5aae-metrics-certs\") pod \"router-default-5444994796-5bgx9\" (UID: \"6890a664-38ec-4702-b9db-7bbc19fe5aae\") " pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.354292 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-secret-volume\") pod 
\"collect-profiles-29424105-lrv4h\" (UID: \"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.354595 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e95e0ed3-6195-4925-abbb-116520ae5098-proxy-tls\") pod \"machine-config-operator-74547568cd-m8ht8\" (UID: \"e95e0ed3-6195-4925-abbb-116520ae5098\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.355166 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-krdwz\" (UID: \"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.356491 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c498d893-da54-4a88-9314-93fdfaaf130d-serving-cert\") pod \"etcd-operator-b45778765-66mrd\" (UID: \"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.370860 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.410326 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.412033 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 11 09:56:19 crc 
kubenswrapper[4746]: I1211 09:56:19.429753 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/847c9b9b-d231-4694-8229-0730ba158052-metrics-tls\") pod \"ingress-operator-5b745b69d9-q9jb9\" (UID: \"847c9b9b-d231-4694-8229-0730ba158052\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.430387 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.441472 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:19 crc kubenswrapper[4746]: E1211 09:56:19.442194 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:19.942170038 +0000 UTC m=+152.802033351 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.442361 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: E1211 09:56:19.443328 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:19.943314159 +0000 UTC m=+152.803177552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.443728 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/068ad235-6423-470d-9085-8e4536f18109-signing-key\") pod \"service-ca-9c57cc56f-vjs55\" (UID: \"068ad235-6423-470d-9085-8e4536f18109\") " pod="openshift-service-ca/service-ca-9c57cc56f-vjs55" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.450425 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.471300 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.498096 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.503627 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/847c9b9b-d231-4694-8229-0730ba158052-trusted-ca\") pod \"ingress-operator-5b745b69d9-q9jb9\" (UID: \"847c9b9b-d231-4694-8229-0730ba158052\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.510881 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 11 09:56:19 crc 
kubenswrapper[4746]: I1211 09:56:19.533915 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.539203 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-config-volume\") pod \"collect-profiles-29424105-lrv4h\" (UID: \"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.543640 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:19 crc kubenswrapper[4746]: E1211 09:56:19.544467 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:20.044448151 +0000 UTC m=+152.904311464 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.550863 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.560882 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/068ad235-6423-470d-9085-8e4536f18109-signing-cabundle\") pod \"service-ca-9c57cc56f-vjs55\" (UID: \"068ad235-6423-470d-9085-8e4536f18109\") " pod="openshift-service-ca/service-ca-9c57cc56f-vjs55" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.571247 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.591772 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.619525 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.631652 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aee37070-f695-4b53-aaac-c41b538f28f6-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-tsvvn\" (UID: \"aee37070-f695-4b53-aaac-c41b538f28f6\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.633166 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.645549 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: E1211 09:56:19.645926 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:20.145911564 +0000 UTC m=+153.005774877 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.651187 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.652772 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aee37070-f695-4b53-aaac-c41b538f28f6-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-tsvvn\" (UID: \"aee37070-f695-4b53-aaac-c41b538f28f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.671607 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.690580 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.710504 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.717499 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/196232fe-f052-4cfb-8ecc-102b9411a95f-config-volume\") pod \"dns-default-952k5\" (UID: \"196232fe-f052-4cfb-8ecc-102b9411a95f\") " 
pod="openshift-dns/dns-default-952k5" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.730381 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.742789 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/196232fe-f052-4cfb-8ecc-102b9411a95f-metrics-tls\") pod \"dns-default-952k5\" (UID: \"196232fe-f052-4cfb-8ecc-102b9411a95f\") " pod="openshift-dns/dns-default-952k5" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.746420 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:19 crc kubenswrapper[4746]: E1211 09:56:19.746535 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:20.246514392 +0000 UTC m=+153.106377725 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.746683 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: E1211 09:56:19.747262 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:20.247250762 +0000 UTC m=+153.107114075 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.753715 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.775708 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.793229 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.803138 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/75c281af-5b44-463c-9f6f-cb666090c7c6-cert\") pod \"ingress-canary-nh8tz\" (UID: \"75c281af-5b44-463c-9f6f-cb666090c7c6\") " pod="openshift-ingress-canary/ingress-canary-nh8tz" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.813979 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.831408 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.848220 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:19 crc kubenswrapper[4746]: E1211 09:56:19.848414 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:20.348392194 +0000 UTC m=+153.208255507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.848699 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: E1211 09:56:19.849005 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:20.348993341 +0000 UTC m=+153.208856654 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.879687 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqql8\" (UniqueName: \"kubernetes.io/projected/6cae8380-85f7-4534-9bfc-46c5a3d6711f-kube-api-access-hqql8\") pod \"oauth-openshift-558db77b4-czfmv\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.946905 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcct5\" (UniqueName: \"kubernetes.io/projected/1362f4c2-fffd-4e0f-9a2c-07fe8666d6be-kube-api-access-pcct5\") pod \"console-operator-58897d9998-bsdlm\" (UID: \"1362f4c2-fffd-4e0f-9a2c-07fe8666d6be\") " pod="openshift-console-operator/console-operator-58897d9998-bsdlm" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.947011 4746 request.go:700] Waited for 1.870978422s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.949299 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:19 crc 
kubenswrapper[4746]: E1211 09:56:19.949801 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:20.449770494 +0000 UTC m=+153.309633867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.950219 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:19 crc kubenswrapper[4746]: E1211 09:56:19.951938 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:20.451924142 +0000 UTC m=+153.311787455 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.965122 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksx79\" (UniqueName: \"kubernetes.io/projected/fcffdcae-c940-447e-8b52-e6ba9df066cd-kube-api-access-ksx79\") pod \"machine-config-controller-84d6567774-sxscv\" (UID: \"fcffdcae-c940-447e-8b52-e6ba9df066cd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.965153 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2tg9\" (UniqueName: \"kubernetes.io/projected/d1fbd58c-6a34-456a-a1d6-854c25fb0d9f-kube-api-access-k2tg9\") pod \"multus-admission-controller-857f4d67dd-t5lpk\" (UID: \"d1fbd58c-6a34-456a-a1d6-854c25fb0d9f\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-t5lpk" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.965254 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f8jb\" (UniqueName: \"kubernetes.io/projected/13c5b559-f24d-4e4e-a905-80a3da8dd577-kube-api-access-2f8jb\") pod \"machine-approver-56656f9798-s5g5g\" (UID: \"13c5b559-f24d-4e4e-a905-80a3da8dd577\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.970576 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgq54\" (UniqueName: 
\"kubernetes.io/projected/2d6e68f4-a35b-43d1-b1fb-95600add4933-kube-api-access-rgq54\") pod \"console-f9d7485db-4n677\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.984742 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfwcs\" (UniqueName: \"kubernetes.io/projected/49f71235-96fa-4452-8805-d08461253a1f-kube-api-access-nfwcs\") pod \"olm-operator-6b444d44fb-mdgqr\" (UID: \"49f71235-96fa-4452-8805-d08461253a1f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" Dec 11 09:56:19 crc kubenswrapper[4746]: I1211 09:56:19.987905 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g" Dec 11 09:56:20 crc kubenswrapper[4746]: W1211 09:56:20.003200 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13c5b559_f24d_4e4e_a905_80a3da8dd577.slice/crio-48da84c93e320f94b923e4d30999a30c0a5e46c31f3d1daa71f9e6075615434b WatchSource:0}: Error finding container 48da84c93e320f94b923e4d30999a30c0a5e46c31f3d1daa71f9e6075615434b: Status 404 returned error can't find the container with id 48da84c93e320f94b923e4d30999a30c0a5e46c31f3d1daa71f9e6075615434b Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.006390 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.074371 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db7cc\" (UniqueName: \"kubernetes.io/projected/3534a1e6-5e3c-4ae5-a981-228d9ae0d5bb-kube-api-access-db7cc\") pod \"control-plane-machine-set-operator-78cbb6b69f-4t7sz\" (UID: \"3534a1e6-5e3c-4ae5-a981-228d9ae0d5bb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4t7sz" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.074625 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.074820 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7gr5\" (UniqueName: \"kubernetes.io/projected/2895076f-4e51-4f1c-ae8b-e8e9d1b8888d-kube-api-access-b7gr5\") pod \"downloads-7954f5f757-nqbcs\" (UID: \"2895076f-4e51-4f1c-ae8b-e8e9d1b8888d\") " pod="openshift-console/downloads-7954f5f757-nqbcs" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.076148 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4t7sz" Dec 11 09:56:20 crc kubenswrapper[4746]: E1211 09:56:20.076267 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:20.576248789 +0000 UTC m=+153.436112092 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.076746 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.076784 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:20 crc kubenswrapper[4746]: E1211 09:56:20.077608 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:20.577592925 +0000 UTC m=+153.437456238 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.077803 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-nqbcs" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.079161 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2rn8\" (UniqueName: \"kubernetes.io/projected/6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7-kube-api-access-s2rn8\") pod \"authentication-operator-69f744f599-pzmdh\" (UID: \"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.085809 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bsdlm" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.089988 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.099957 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.110737 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.115594 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66hfp\" (UniqueName: \"kubernetes.io/projected/97b454ad-ff7c-4c7b-9d53-79b92b7520de-kube-api-access-66hfp\") pod \"apiserver-76f77b778f-8b44g\" (UID: \"97b454ad-ff7c-4c7b-9d53-79b92b7520de\") " pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.130488 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.151509 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.153699 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.173805 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-t5lpk" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.177952 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:20 crc kubenswrapper[4746]: E1211 09:56:20.179223 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:20.679172561 +0000 UTC m=+153.539035874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.185611 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.242521 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e-certs\") pod \"machine-config-server-s9pkn\" (UID: \"ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e\") " pod="openshift-machine-config-operator/machine-config-server-s9pkn" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.243194 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flqmx\" (UniqueName: \"kubernetes.io/projected/fda084f4-b624-4036-8da8-27d83af188ba-kube-api-access-flqmx\") pod \"route-controller-manager-6576b87f9c-fgrh2\" (UID: \"fda084f4-b624-4036-8da8-27d83af188ba\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.243352 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e-node-bootstrap-token\") pod \"machine-config-server-s9pkn\" (UID: \"ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e\") " pod="openshift-machine-config-operator/machine-config-server-s9pkn" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.278908 4746 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.290729 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 
09:56:20 crc kubenswrapper[4746]: E1211 09:56:20.291092 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:20.791064432 +0000 UTC m=+153.650927745 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.291366 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.314806 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-bound-sa-token\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.315414 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5eef2155-a824-4d97-a67e-d2c19aaecbd6-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-tx2q9\" (UID: \"5eef2155-a824-4d97-a67e-d2c19aaecbd6\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.317308 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z59cm\" 
(UniqueName: \"kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-kube-api-access-z59cm\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.323391 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb8m7\" (UniqueName: \"kubernetes.io/projected/eeec20bb-7eb5-48b5-81c8-ba8ded6347e4-kube-api-access-tb8m7\") pod \"catalog-operator-68c6474976-zlj7m\" (UID: \"eeec20bb-7eb5-48b5-81c8-ba8ded6347e4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.346670 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmnf9\" (UniqueName: \"kubernetes.io/projected/75c281af-5b44-463c-9f6f-cb666090c7c6-kube-api-access-zmnf9\") pod \"ingress-canary-nh8tz\" (UID: \"75c281af-5b44-463c-9f6f-cb666090c7c6\") " pod="openshift-ingress-canary/ingress-canary-nh8tz" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.350740 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.354207 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6b7h4" event={"ID":"a124ef34-55e9-4974-95ba-4b55ac4ad5ce","Type":"ContainerStarted","Data":"fbf5916c93485b404e6fcfbd0c9a4e426fe4c7c2918ca9ecd5a93c215e4defff"} Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.359028 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g" event={"ID":"13c5b559-f24d-4e4e-a905-80a3da8dd577","Type":"ContainerStarted","Data":"48da84c93e320f94b923e4d30999a30c0a5e46c31f3d1daa71f9e6075615434b"} Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.362526 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdbvg\" (UniqueName: \"kubernetes.io/projected/6890a664-38ec-4702-b9db-7bbc19fe5aae-kube-api-access-pdbvg\") pod \"router-default-5444994796-5bgx9\" (UID: \"6890a664-38ec-4702-b9db-7bbc19fe5aae\") " pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.367817 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n4pc\" (UniqueName: \"kubernetes.io/projected/2e68686f-8a09-4477-8d31-3e1e762d06d3-kube-api-access-4n4pc\") pod \"service-ca-operator-777779d784-8c2w5\" (UID: \"2e68686f-8a09-4477-8d31-3e1e762d06d3\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.405585 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-nh8tz" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.406386 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.406809 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:20 crc kubenswrapper[4746]: E1211 09:56:20.407300 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:20.907269761 +0000 UTC m=+153.767133074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.411172 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.414885 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtf6w\" (UniqueName: \"kubernetes.io/projected/9dd7dcf7-5174-44fb-b164-38de3c8788ad-kube-api-access-xtf6w\") pod \"csi-hostpathplugin-m8w9j\" (UID: \"9dd7dcf7-5174-44fb-b164-38de3c8788ad\") " pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.415212 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft" event={"ID":"fce8473a-0f95-4788-af68-35b608885c41","Type":"ContainerStarted","Data":"0b922ac29e8ceebaad15c797eb6ff159b9b297b6449107675f847fec73d7091c"} Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.416920 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6fwv\" (UniqueName: \"kubernetes.io/projected/0178df62-7497-43fc-b639-3f44170cff1c-kube-api-access-x6fwv\") pod \"package-server-manager-789f6589d5-dhjm8\" (UID: \"0178df62-7497-43fc-b639-3f44170cff1c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.442119 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7q58\" (UniqueName: \"kubernetes.io/projected/ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e-kube-api-access-p7q58\") pod \"machine-config-server-s9pkn\" (UID: \"ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e\") " pod="openshift-machine-config-operator/machine-config-server-s9pkn" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.469155 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.481210 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz6gl\" (UniqueName: \"kubernetes.io/projected/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-kube-api-access-zz6gl\") pod \"collect-profiles-29424105-lrv4h\" (UID: \"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.496034 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.499100 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpcdw\" (UniqueName: \"kubernetes.io/projected/a00f4c19-cef1-4b49-9e22-090e3cf5f2bd-kube-api-access-jpcdw\") pod \"dns-operator-744455d44c-7hm64\" (UID: \"a00f4c19-cef1-4b49-9e22-090e3cf5f2bd\") " pod="openshift-dns-operator/dns-operator-744455d44c-7hm64" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.504791 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.508336 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:20 crc kubenswrapper[4746]: E1211 09:56:20.508912 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:21.008896047 +0000 UTC m=+153.868759430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.555478 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.569268 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5503b821-54ed-4fd8-a336-ea952f033321-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-s2jbs\" (UID: \"5503b821-54ed-4fd8-a336-ea952f033321\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.573443 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmxmz\" (UniqueName: \"kubernetes.io/projected/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-kube-api-access-gmxmz\") pod \"marketplace-operator-79b997595-krdwz\" (UID: \"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6\") " pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.582445 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9sgh\" (UniqueName: \"kubernetes.io/projected/4b01d4aa-94e7-48da-86e7-6e5685e259b5-kube-api-access-c9sgh\") pod \"apiserver-7bbb656c7d-w96w4\" (UID: \"4b01d4aa-94e7-48da-86e7-6e5685e259b5\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.582770 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.582970 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfzv7\" (UniqueName: \"kubernetes.io/projected/196232fe-f052-4cfb-8ecc-102b9411a95f-kube-api-access-mfzv7\") pod \"dns-default-952k5\" (UID: \"196232fe-f052-4cfb-8ecc-102b9411a95f\") " pod="openshift-dns/dns-default-952k5" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.658491 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.659783 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-952k5" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.660665 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.660781 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4phs\" (UniqueName: \"kubernetes.io/projected/068ad235-6423-470d-9085-8e4536f18109-kube-api-access-x4phs\") pod \"service-ca-9c57cc56f-vjs55\" (UID: \"068ad235-6423-470d-9085-8e4536f18109\") " pod="openshift-service-ca/service-ca-9c57cc56f-vjs55" Dec 11 09:56:20 crc kubenswrapper[4746]: E1211 09:56:20.660961 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 09:56:21.160944231 +0000 UTC m=+154.020807544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.661071 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:20 crc kubenswrapper[4746]: E1211 09:56:20.661908 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:21.161896346 +0000 UTC m=+154.021759659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.681123 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-s9pkn" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.683271 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fczdw\" (UniqueName: \"kubernetes.io/projected/ce723dd2-6ea2-49d1-9faf-c92026630754-kube-api-access-fczdw\") pod \"controller-manager-879f6c89f-26ppb\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.694434 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.710036 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t45tb\" (UniqueName: \"kubernetes.io/projected/9a59886a-3dad-4b64-b432-db95667e0bdc-kube-api-access-t45tb\") pod \"kube-storage-version-migrator-operator-b67b599dd-ntcrz\" (UID: \"9a59886a-3dad-4b64-b432-db95667e0bdc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.718674 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftpgw\" (UniqueName: \"kubernetes.io/projected/847c9b9b-d231-4694-8229-0730ba158052-kube-api-access-ftpgw\") pod \"ingress-operator-5b745b69d9-q9jb9\" (UID: \"847c9b9b-d231-4694-8229-0730ba158052\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.718939 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lw2q\" (UniqueName: \"kubernetes.io/projected/247537bc-cda2-416f-8040-ecd313916cf2-kube-api-access-8lw2q\") pod \"packageserver-d55dfcdfc-4hf8x\" (UID: \"247537bc-cda2-416f-8040-ecd313916cf2\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.721582 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/847c9b9b-d231-4694-8229-0730ba158052-bound-sa-token\") pod \"ingress-operator-5b745b69d9-q9jb9\" (UID: \"847c9b9b-d231-4694-8229-0730ba158052\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.726944 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgfcd\" (UniqueName: \"kubernetes.io/projected/c498d893-da54-4a88-9314-93fdfaaf130d-kube-api-access-wgfcd\") pod \"etcd-operator-b45778765-66mrd\" (UID: \"c498d893-da54-4a88-9314-93fdfaaf130d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.757802 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62dj6\" (UniqueName: \"kubernetes.io/projected/aee37070-f695-4b53-aaac-c41b538f28f6-kube-api-access-62dj6\") pod \"openshift-controller-manager-operator-756b6f6bc6-tsvvn\" (UID: \"aee37070-f695-4b53-aaac-c41b538f28f6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.761774 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.763076 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9b68\" (UniqueName: 
\"kubernetes.io/projected/26ce87dd-769c-424f-8865-6e2c8d81ee39-kube-api-access-d9b68\") pod \"migrator-59844c95c7-sf4lq\" (UID: \"26ce87dd-769c-424f-8865-6e2c8d81ee39\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sf4lq" Dec 11 09:56:20 crc kubenswrapper[4746]: E1211 09:56:20.763965 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:21.263943144 +0000 UTC m=+154.123806457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.764369 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:20 crc kubenswrapper[4746]: E1211 09:56:20.764774 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:21.264765816 +0000 UTC m=+154.124629129 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.774240 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8hdj\" (UniqueName: \"kubernetes.io/projected/e95e0ed3-6195-4925-abbb-116520ae5098-kube-api-access-n8hdj\") pod \"machine-config-operator-74547568cd-m8ht8\" (UID: \"e95e0ed3-6195-4925-abbb-116520ae5098\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.793658 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-7hm64" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.809093 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.817682 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.835343 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.844901 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.853773 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.871665 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:20 crc kubenswrapper[4746]: E1211 09:56:20.872018 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:21.371990692 +0000 UTC m=+154.231854015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.875337 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sf4lq" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.968710 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.972601 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:20 crc kubenswrapper[4746]: E1211 09:56:20.972872 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:21.472862148 +0000 UTC m=+154.332725461 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:20 crc kubenswrapper[4746]: I1211 09:56:20.974792 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn" Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.116859 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.117395 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.117636 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vjs55" Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.118073 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:21 crc kubenswrapper[4746]: E1211 09:56:21.118381 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:21.618360306 +0000 UTC m=+154.478223619 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.118435 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:21 crc kubenswrapper[4746]: E1211 09:56:21.118736 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:21.618723396 +0000 UTC m=+154.478586709 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.156799 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bsdlm"] Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.241097 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:21 crc kubenswrapper[4746]: E1211 09:56:21.241379 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:21.741347097 +0000 UTC m=+154.601210410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.241448 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:21 crc kubenswrapper[4746]: E1211 09:56:21.242310 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:21.742299863 +0000 UTC m=+154.602163176 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.343109 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:21 crc kubenswrapper[4746]: E1211 09:56:21.343526 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:21.843510928 +0000 UTC m=+154.703374241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:21 crc kubenswrapper[4746]: W1211 09:56:21.440843 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1362f4c2_fffd_4e0f_9a2c_07fe8666d6be.slice/crio-44503e52249b22e6efb8680dfa10aa57bfd4c9628deb78540d6f1347383760d6 WatchSource:0}: Error finding container 44503e52249b22e6efb8680dfa10aa57bfd4c9628deb78540d6f1347383760d6: Status 404 returned error can't find the container with id 44503e52249b22e6efb8680dfa10aa57bfd4c9628deb78540d6f1347383760d6 Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.444155 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:21 crc kubenswrapper[4746]: E1211 09:56:21.444595 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:21.944580558 +0000 UTC m=+154.804443881 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.455230 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-s9pkn" event={"ID":"ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e","Type":"ContainerStarted","Data":"c9c1a458a66cf202a5301ba0c06de4e7e18670a9a691985a0b8aab5a47721fdf"} Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.475458 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5bgx9" event={"ID":"6890a664-38ec-4702-b9db-7bbc19fe5aae","Type":"ContainerStarted","Data":"642682ef6047aa391191e91f9230cd48309fbbf4986675f9d6f89a720bb17440"} Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.477163 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft" Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.545615 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:21 crc kubenswrapper[4746]: E1211 09:56:21.545790 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2025-12-11 09:56:22.045771204 +0000 UTC m=+154.905634517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.670766 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:21 crc kubenswrapper[4746]: E1211 09:56:21.671411 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:22.171399815 +0000 UTC m=+155.031263128 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.683826 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6b7h4" podStartSLOduration=132.683809659 podStartE2EDuration="2m12.683809659s" podCreationTimestamp="2025-12-11 09:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:21.679551005 +0000 UTC m=+154.539414318" watchObservedRunningTime="2025-12-11 09:56:21.683809659 +0000 UTC m=+154.543672972" Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.697172 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2"] Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.772986 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:21 crc kubenswrapper[4746]: E1211 09:56:21.773447 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 09:56:22.273428852 +0000 UTC m=+155.133292165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.773667 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:21 crc kubenswrapper[4746]: E1211 09:56:21.774103 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:22.27409336 +0000 UTC m=+155.133956673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.876447 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:21 crc kubenswrapper[4746]: E1211 09:56:21.876676 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:22.376651381 +0000 UTC m=+155.236514694 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.876941 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:21 crc kubenswrapper[4746]: E1211 09:56:21.877323 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:22.377304598 +0000 UTC m=+155.237167911 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.972080 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-g2qhz" podStartSLOduration=132.972031119 podStartE2EDuration="2m12.972031119s" podCreationTimestamp="2025-12-11 09:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:21.971386492 +0000 UTC m=+154.831249825" watchObservedRunningTime="2025-12-11 09:56:21.972031119 +0000 UTC m=+154.831894422" Dec 11 09:56:21 crc kubenswrapper[4746]: I1211 09:56:21.979143 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:21 crc kubenswrapper[4746]: E1211 09:56:21.980340 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:22.480319923 +0000 UTC m=+155.340183236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:21 crc kubenswrapper[4746]: W1211 09:56:21.995979 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfda084f4_b624_4036_8da8_27d83af188ba.slice/crio-89e41111299184017949e1927ba3b04a87b8963d07a410317b00012b7b679b06 WatchSource:0}: Error finding container 89e41111299184017949e1927ba3b04a87b8963d07a410317b00012b7b679b06: Status 404 returned error can't find the container with id 89e41111299184017949e1927ba3b04a87b8963d07a410317b00012b7b679b06 Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.003104 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wnp8m" podStartSLOduration=132.003085285 podStartE2EDuration="2m12.003085285s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:22.001172854 +0000 UTC m=+154.861036177" watchObservedRunningTime="2025-12-11 09:56:22.003085285 +0000 UTC m=+154.862948598" Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.178421 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.178927 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft" podStartSLOduration=132.178910459 podStartE2EDuration="2m12.178910459s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:22.177668756 +0000 UTC m=+155.037532079" watchObservedRunningTime="2025-12-11 09:56:22.178910459 +0000 UTC m=+155.038773772" Dec 11 09:56:22 crc kubenswrapper[4746]: E1211 09:56:22.178958 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:22.67894319 +0000 UTC m=+155.538806503 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:22 crc kubenswrapper[4746]: E1211 09:56:22.291942 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:22.791913761 +0000 UTC m=+155.651777074 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.292458 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.292808 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:22 crc kubenswrapper[4746]: E1211 09:56:22.293221 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:22.793204996 +0000 UTC m=+155.653068309 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.420328 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:22 crc kubenswrapper[4746]: E1211 09:56:22.420747 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:22.920730799 +0000 UTC m=+155.780594102 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.526911 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:22 crc kubenswrapper[4746]: E1211 09:56:22.527328 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:23.02731345 +0000 UTC m=+155.887176763 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.545000 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bsdlm" event={"ID":"1362f4c2-fffd-4e0f-9a2c-07fe8666d6be","Type":"ContainerStarted","Data":"44503e52249b22e6efb8680dfa10aa57bfd4c9628deb78540d6f1347383760d6"} Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.546130 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" event={"ID":"fda084f4-b624-4036-8da8-27d83af188ba","Type":"ContainerStarted","Data":"89e41111299184017949e1927ba3b04a87b8963d07a410317b00012b7b679b06"} Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.549662 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g" event={"ID":"13c5b559-f24d-4e4e-a905-80a3da8dd577","Type":"ContainerStarted","Data":"e2008994e2b653d7b616d0094228591a02636cb1b26abf389352429c156aac65"} Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.553104 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5bgx9" event={"ID":"6890a664-38ec-4702-b9db-7bbc19fe5aae","Type":"ContainerStarted","Data":"92c6a94f71e535c6916d36155d9b17c23b6d10e5031977091c57cb475e56b98b"} Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.618762 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-zkxvv" podStartSLOduration=132.618741971 podStartE2EDuration="2m12.618741971s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:22.540503144 +0000 UTC m=+155.400366457" watchObservedRunningTime="2025-12-11 09:56:22.618741971 +0000 UTC m=+155.478605284" Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.630519 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:22 crc kubenswrapper[4746]: E1211 09:56:22.631860 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:23.131845193 +0000 UTC m=+155.991708506 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.778120 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:22 crc kubenswrapper[4746]: E1211 09:56:22.778538 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:23.278523722 +0000 UTC m=+156.138387045 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.803249 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pzmdh"] Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.942943 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:22 crc kubenswrapper[4746]: E1211 09:56:22.943418 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:23.44334412 +0000 UTC m=+156.303207433 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:22 crc kubenswrapper[4746]: I1211 09:56:22.971346 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-8ctft" Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.047516 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:23 crc kubenswrapper[4746]: E1211 09:56:23.048212 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:23.548198743 +0000 UTC m=+156.408062056 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.148594 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:23 crc kubenswrapper[4746]: E1211 09:56:23.149000 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:23.648978096 +0000 UTC m=+156.508841409 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.214650 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-lwl94" podStartSLOduration=133.214632193 podStartE2EDuration="2m13.214632193s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:23.213235536 +0000 UTC m=+156.073098869" watchObservedRunningTime="2025-12-11 09:56:23.214632193 +0000 UTC m=+156.074495516" Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.250893 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:23 crc kubenswrapper[4746]: E1211 09:56:23.253322 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:23.753302714 +0000 UTC m=+156.613166027 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.358184 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:23 crc kubenswrapper[4746]: E1211 09:56:23.358938 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:23.858922319 +0000 UTC m=+156.718785632 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.379685 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5bgx9" podStartSLOduration=133.379669417 podStartE2EDuration="2m13.379669417s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:23.377072077 +0000 UTC m=+156.236935390" watchObservedRunningTime="2025-12-11 09:56:23.379669417 +0000 UTC m=+156.239532730" Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.461751 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:23 crc kubenswrapper[4746]: E1211 09:56:23.462069 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:23.962056245 +0000 UTC m=+156.821919558 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.497694 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.501191 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 09:56:23 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Dec 11 09:56:23 crc kubenswrapper[4746]: [+]process-running ok Dec 11 09:56:23 crc kubenswrapper[4746]: healthz check failed Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.501261 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.513169 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-4n677"] Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.537197 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv"] Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.562831 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-czfmv"]
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.562872 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" event={"ID":"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7","Type":"ContainerStarted","Data":"fa5c2303a924e622aab24dc81cb955d73f9697fa26d754c5336b729bee603e9d"}
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.563285 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:23 crc kubenswrapper[4746]: E1211 09:56:23.563537 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:24.063520187 +0000 UTC m=+156.923383500 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.564205 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4n677" event={"ID":"2d6e68f4-a35b-43d1-b1fb-95600add4933","Type":"ContainerStarted","Data":"aa7efdda5bafdc0e004dd0119ad8863d9e3f399fb3020ed679a0364b61dedc27"}
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.565141 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-nqbcs"]
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.565976 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g" event={"ID":"13c5b559-f24d-4e4e-a905-80a3da8dd577","Type":"ContainerStarted","Data":"009a7d66ba802a85cdcbcc2e29e04b912dbbf461d88a52afe796bed7396bb0e3"}
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.567469 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-s9pkn" event={"ID":"ddfd0bc6-1f0f-487c-9465-f7b4b63a2b1e","Type":"ContainerStarted","Data":"9132aaf964576ebedd15383a235fa33d236f123e6789f179c7c88985decf88b3"}
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.569233 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bsdlm" event={"ID":"1362f4c2-fffd-4e0f-9a2c-07fe8666d6be","Type":"ContainerStarted","Data":"850bdb602786dbd86b0e104bc2381c799e566570bd2e314ac43b3b13db719294"}
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.569975 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-bsdlm"
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.571325 4746 patch_prober.go:28] interesting pod/console-operator-58897d9998-bsdlm container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.571371 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bsdlm" podUID="1362f4c2-fffd-4e0f-9a2c-07fe8666d6be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.572121 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv" event={"ID":"fcffdcae-c940-447e-8b52-e6ba9df066cd","Type":"ContainerStarted","Data":"8d81b786904853777b73fafe4c0baa0a424b5a68769297c62bb533466ffafa61"}
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.579957 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" event={"ID":"fda084f4-b624-4036-8da8-27d83af188ba","Type":"ContainerStarted","Data":"c45c54b7abf7af48cb84f22474ff44540a7f1f5d16f48000e7b7515718fe0433"}
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.580624 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2"
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.587125 4746 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-fgrh2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body=
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.587175 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" podUID="fda084f4-b624-4036-8da8-27d83af188ba" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused"
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.598557 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-s5g5g" podStartSLOduration=134.59853074 podStartE2EDuration="2m14.59853074s" podCreationTimestamp="2025-12-11 09:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:23.594671866 +0000 UTC m=+156.454535189" watchObservedRunningTime="2025-12-11 09:56:23.59853074 +0000 UTC m=+156.458394063"
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.620817 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" podStartSLOduration=133.620798739 podStartE2EDuration="2m13.620798739s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:23.618498287 +0000 UTC m=+156.478361600" watchObservedRunningTime="2025-12-11 09:56:23.620798739 +0000 UTC m=+156.480662052"
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.628564 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4t7sz"]
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.666795 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:23 crc kubenswrapper[4746]: E1211 09:56:23.671501 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:24.171485594 +0000 UTC m=+157.031348907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.677914 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-s9pkn" podStartSLOduration=6.677896336 podStartE2EDuration="6.677896336s" podCreationTimestamp="2025-12-11 09:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:23.676707095 +0000 UTC m=+156.536570408" watchObservedRunningTime="2025-12-11 09:56:23.677896336 +0000 UTC m=+156.537759639"
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.686693 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-bsdlm" podStartSLOduration=133.686672503 podStartE2EDuration="2m13.686672503s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:23.650973381 +0000 UTC m=+156.510836694" watchObservedRunningTime="2025-12-11 09:56:23.686672503 +0000 UTC m=+156.546535816"
Dec 11 09:56:23 crc kubenswrapper[4746]: W1211 09:56:23.694868 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3534a1e6_5e3c_4ae5_a981_228d9ae0d5bb.slice/crio-eb060efa0d25ce1476b18de6171c2eb760086cac6ac131812ac487a2f7acff5b WatchSource:0}: Error finding container eb060efa0d25ce1476b18de6171c2eb760086cac6ac131812ac487a2f7acff5b: Status 404 returned error can't find the container with id eb060efa0d25ce1476b18de6171c2eb760086cac6ac131812ac487a2f7acff5b
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.745333 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs"]
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.767600 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8"]
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.769426 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:23 crc kubenswrapper[4746]: E1211 09:56:23.770433 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:24.270412786 +0000 UTC m=+157.130276099 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.771596 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-m8w9j"]
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.782863 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-8b44g"]
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.783995 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m"]
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.784092 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9"]
Dec 11 09:56:23 crc kubenswrapper[4746]: W1211 09:56:23.801708 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97b454ad_ff7c_4c7b_9d53_79b92b7520de.slice/crio-40721b430bf07e4b1259e8c564ec39d6cff4ad41a76057a77354963d23d0c3cb WatchSource:0}: Error finding container 40721b430bf07e4b1259e8c564ec39d6cff4ad41a76057a77354963d23d0c3cb: Status 404 returned error can't find the container with id 40721b430bf07e4b1259e8c564ec39d6cff4ad41a76057a77354963d23d0c3cb
Dec 11 09:56:23 crc kubenswrapper[4746]: W1211 09:56:23.804649 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeeec20bb_7eb5_48b5_81c8_ba8ded6347e4.slice/crio-0065f6c5c49a64e16ab43484cdc3936c72e682b5f7cdce4383883cc4da969bde WatchSource:0}: Error finding container 0065f6c5c49a64e16ab43484cdc3936c72e682b5f7cdce4383883cc4da969bde: Status 404 returned error can't find the container with id 0065f6c5c49a64e16ab43484cdc3936c72e682b5f7cdce4383883cc4da969bde
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.804801 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-krdwz"]
Dec 11 09:56:23 crc kubenswrapper[4746]: W1211 09:56:23.835313 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0178df62_7497_43fc_b639_3f44170cff1c.slice/crio-fe8928cb88da92a9174c85c09a3ac198f4af10f97fb71671bc96e13b3a83df6a WatchSource:0}: Error finding container fe8928cb88da92a9174c85c09a3ac198f4af10f97fb71671bc96e13b3a83df6a: Status 404 returned error can't find the container with id fe8928cb88da92a9174c85c09a3ac198f4af10f97fb71671bc96e13b3a83df6a
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.875906 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:23 crc kubenswrapper[4746]: E1211 09:56:23.876234 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:24.376223646 +0000 UTC m=+157.236086959 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.944066 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-sf4lq"]
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.962464 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9"]
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.962788 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4"]
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.979269 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:23 crc kubenswrapper[4746]: E1211 09:56:23.980063 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:24.48002284 +0000 UTC m=+157.339886143 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.984184 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-t5lpk"]
Dec 11 09:56:23 crc kubenswrapper[4746]: I1211 09:56:23.994741 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz"]
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.006285 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-nh8tz"]
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.018281 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-952k5"]
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.026833 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr"]
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.080863 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:24 crc kubenswrapper[4746]: E1211 09:56:24.081257 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:24.581244535 +0000 UTC m=+157.441107848 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:24 crc kubenswrapper[4746]: W1211 09:56:24.084921 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod196232fe_f052_4cfb_8ecc_102b9411a95f.slice/crio-98d9e2d19abd8cb1a0c142192ebdf3e17c18df091f9895f5b0ee5610001cfd0c WatchSource:0}: Error finding container 98d9e2d19abd8cb1a0c142192ebdf3e17c18df091f9895f5b0ee5610001cfd0c: Status 404 returned error can't find the container with id 98d9e2d19abd8cb1a0c142192ebdf3e17c18df091f9895f5b0ee5610001cfd0c
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.148645 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-7hm64"]
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.157158 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8"]
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.183498 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:24 crc kubenswrapper[4746]: E1211 09:56:24.183857 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:24.683842258 +0000 UTC m=+157.543705571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.185623 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h"]
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.201552 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn"]
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.207097 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vjs55"]
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.221029 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-66mrd"]
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.225039 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x"]
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.247168 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-26ppb"]
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.251943 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5"]
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.284906 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:24 crc kubenswrapper[4746]: E1211 09:56:24.285363 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:24.7853488 +0000 UTC m=+157.645212113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:24 crc kubenswrapper[4746]: W1211 09:56:24.312061 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9c295d4_1896_4e5e_989e_2d0a3eb9b07e.slice/crio-1923c28a18f89b8fa90dcbd554d5fc9660521905f18f91e172d1da1d63990ea9 WatchSource:0}: Error finding container 1923c28a18f89b8fa90dcbd554d5fc9660521905f18f91e172d1da1d63990ea9: Status 404 returned error can't find the container with id 1923c28a18f89b8fa90dcbd554d5fc9660521905f18f91e172d1da1d63990ea9
Dec 11 09:56:24 crc kubenswrapper[4746]: W1211 09:56:24.319364 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaee37070_f695_4b53_aaac_c41b538f28f6.slice/crio-6ffe997758de2923aec18bab11fbeb0086b39a5aa939e944a8f24d2945082169 WatchSource:0}: Error finding container 6ffe997758de2923aec18bab11fbeb0086b39a5aa939e944a8f24d2945082169: Status 404 returned error can't find the container with id 6ffe997758de2923aec18bab11fbeb0086b39a5aa939e944a8f24d2945082169
Dec 11 09:56:24 crc kubenswrapper[4746]: W1211 09:56:24.339679 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod068ad235_6423_470d_9085_8e4536f18109.slice/crio-ce857b03dba28f967f50054f7077cd91e0aec028d7e0bd1e2f5def8849cc7218 WatchSource:0}: Error finding container ce857b03dba28f967f50054f7077cd91e0aec028d7e0bd1e2f5def8849cc7218: Status 404 returned error can't find the container with id ce857b03dba28f967f50054f7077cd91e0aec028d7e0bd1e2f5def8849cc7218
Dec 11 09:56:24 crc kubenswrapper[4746]: W1211 09:56:24.341290 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod247537bc_cda2_416f_8040_ecd313916cf2.slice/crio-7cc3d65b2f65760c1d15567822234eacb06562ef63910b144f183a6762c3fc55 WatchSource:0}: Error finding container 7cc3d65b2f65760c1d15567822234eacb06562ef63910b144f183a6762c3fc55: Status 404 returned error can't find the container with id 7cc3d65b2f65760c1d15567822234eacb06562ef63910b144f183a6762c3fc55
Dec 11 09:56:24 crc kubenswrapper[4746]: W1211 09:56:24.343497 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc498d893_da54_4a88_9314_93fdfaaf130d.slice/crio-60119ee8a146f94b7117bf6dd2046b6c432bfae89b37c37c7fbe22a0a94847ab WatchSource:0}: Error finding container 60119ee8a146f94b7117bf6dd2046b6c432bfae89b37c37c7fbe22a0a94847ab: Status 404 returned error can't find the container with id 60119ee8a146f94b7117bf6dd2046b6c432bfae89b37c37c7fbe22a0a94847ab
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.385712 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:24 crc kubenswrapper[4746]: E1211 09:56:24.386087 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:24.886057722 +0000 UTC m=+157.745921035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.487393 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:24 crc kubenswrapper[4746]: E1211 09:56:24.487714 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:24.987698299 +0000 UTC m=+157.847561612 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.502714 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 11 09:56:24 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld
Dec 11 09:56:24 crc kubenswrapper[4746]: [+]process-running ok
Dec 11 09:56:24 crc kubenswrapper[4746]: healthz check failed
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.503104 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.589341 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.590442 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.590530 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.591313 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:24 crc kubenswrapper[4746]: E1211 09:56:24.591662 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:25.091637097 +0000 UTC m=+157.951500430 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.594479 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:24 crc kubenswrapper[4746]: E1211 09:56:24.595127 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:25.09510647 +0000 UTC m=+157.954969803 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.597475 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.597515 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.630993 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz" event={"ID":"9a59886a-3dad-4b64-b432-db95667e0bdc","Type":"ContainerStarted","Data":"965d59353e02f0eb33bab4ecedb9004ab0484451149cb35931425f5a41b12a26"}
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.634601 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8" event={"ID":"0178df62-7497-43fc-b639-3f44170cff1c","Type":"ContainerStarted","Data":"62798400f5c64b85b3aef52ac4a1b5e5b8fe80d5c14f7ae526856426ba9d6d01"}
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.634647 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8" event={"ID":"0178df62-7497-43fc-b639-3f44170cff1c","Type":"ContainerStarted","Data":"fe8928cb88da92a9174c85c09a3ac198f4af10f97fb71671bc96e13b3a83df6a"}
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.636487 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" event={"ID":"4b01d4aa-94e7-48da-86e7-6e5685e259b5","Type":"ContainerStarted","Data":"3c9bced77efb56d6e663ee328c0d589acc05404ec8d22b5700381cceed61192f"}
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.639447 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4t7sz" event={"ID":"3534a1e6-5e3c-4ae5-a981-228d9ae0d5bb","Type":"ContainerStarted","Data":"f4263c7cc6ff0cba5b23ac643c82a039f6f2c8faf863ec9e6349dc396f07e38e"}
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.639505 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4t7sz" event={"ID":"3534a1e6-5e3c-4ae5-a981-228d9ae0d5bb","Type":"ContainerStarted","Data":"eb060efa0d25ce1476b18de6171c2eb760086cac6ac131812ac487a2f7acff5b"}
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.644633 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nh8tz" event={"ID":"75c281af-5b44-463c-9f6f-cb666090c7c6","Type":"ContainerStarted","Data":"783ecc5619172140805672eda3f88d18b0c2faa9fb5392c3b930a150a9762ec7"}
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.664676 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-4t7sz" podStartSLOduration=134.664658303 podStartE2EDuration="2m14.664658303s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:24.662591117 +0000 UTC m=+157.522454430" watchObservedRunningTime="2025-12-11 09:56:24.664658303 +0000 UTC m=+157.524521616"
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.742986 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.743353 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7eec3d9f-436e-4d57-8bed-e1ed3bf328ec-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7eec3d9f-436e-4d57-8bed-e1ed3bf328ec\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.743504 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7eec3d9f-436e-4d57-8bed-e1ed3bf328ec-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7eec3d9f-436e-4d57-8bed-e1ed3bf328ec\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 11 09:56:24 crc kubenswrapper[4746]: E1211 09:56:24.743551 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:25.243519796 +0000 UTC m=+158.103383109 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.743609 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:24 crc kubenswrapper[4746]: E1211 09:56:24.744082 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:25.244074791 +0000 UTC m=+158.103938104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.745987 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7hm64" event={"ID":"a00f4c19-cef1-4b49-9e22-090e3cf5f2bd","Type":"ContainerStarted","Data":"3b709f8fc4f7feee80c3ac97d6de3fc49d8b0c10973c51fbffd51780fde81f7a"} Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.751094 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vjs55" event={"ID":"068ad235-6423-470d-9085-8e4536f18109","Type":"ContainerStarted","Data":"ce857b03dba28f967f50054f7077cd91e0aec028d7e0bd1e2f5def8849cc7218"} Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.756520 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" event={"ID":"ce723dd2-6ea2-49d1-9faf-c92026630754","Type":"ContainerStarted","Data":"f9b4bcbb5b008e9b2aa418df7ae96d6871aa27feee08be2c4cbcde9a94ee0d0a"} Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.759476 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" event={"ID":"c498d893-da54-4a88-9314-93fdfaaf130d","Type":"ContainerStarted","Data":"60119ee8a146f94b7117bf6dd2046b6c432bfae89b37c37c7fbe22a0a94847ab"} Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.762568 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9" 
event={"ID":"5eef2155-a824-4d97-a67e-d2c19aaecbd6","Type":"ContainerStarted","Data":"dc3aca355ea2ac815795eee2decfef2c30b25564d7750d804094a445b9c14fcb"} Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.787827 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9" podStartSLOduration=134.787808688 podStartE2EDuration="2m14.787808688s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:24.785754243 +0000 UTC m=+157.645617556" watchObservedRunningTime="2025-12-11 09:56:24.787808688 +0000 UTC m=+157.647672031" Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.793348 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" event={"ID":"247537bc-cda2-416f-8040-ecd313916cf2","Type":"ContainerStarted","Data":"7cc3d65b2f65760c1d15567822234eacb06562ef63910b144f183a6762c3fc55"} Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.805167 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nqbcs" event={"ID":"2895076f-4e51-4f1c-ae8b-e8e9d1b8888d","Type":"ContainerStarted","Data":"bcfb119c6ca60f2f256406f80a4d614d7c19b115819c0aa9ea28c87b6f8b3ac0"} Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.805275 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nqbcs" event={"ID":"2895076f-4e51-4f1c-ae8b-e8e9d1b8888d","Type":"ContainerStarted","Data":"d34912f44d45b91cc644c30278dc36693ba2ade53e0c7c80942f1a183deb12a1"} Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.806496 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-nqbcs" Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 
09:56:24.808666 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" event={"ID":"6f3d51f0-1ac7-40c1-bc4c-0f3fc1ccf7f7","Type":"ContainerStarted","Data":"cafeae0b0fe2769f3e731cf94e8f73ada31d84a865495185cd7642e3f5cf6064"} Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.822015 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sf4lq" event={"ID":"26ce87dd-769c-424f-8865-6e2c8d81ee39","Type":"ContainerStarted","Data":"94130ced92e8b55c399b1da35a107b0388d6af55fad1fbd643dc4279e0b82f1e"} Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.822997 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.823067 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.830958 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-nqbcs" podStartSLOduration=134.83093817 podStartE2EDuration="2m14.83093817s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:24.827359033 +0000 UTC m=+157.687222346" watchObservedRunningTime="2025-12-11 09:56:24.83093817 +0000 UTC m=+157.690801483" Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.832644 4746 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" event={"ID":"6cae8380-85f7-4534-9bfc-46c5a3d6711f","Type":"ContainerStarted","Data":"cf603fe439c3bb316a24cb00d8cd008a30d5d63f3751fea73968b6b8db446c18"} Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.832739 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" event={"ID":"6cae8380-85f7-4534-9bfc-46c5a3d6711f","Type":"ContainerStarted","Data":"e96e6ac187d2015be30d93701f63f32eec5308a45ef8d4c4e50ebfa015299656"} Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.833397 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.846357 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pzmdh" podStartSLOduration=135.846340304 podStartE2EDuration="2m15.846340304s" podCreationTimestamp="2025-12-11 09:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:24.84210736 +0000 UTC m=+157.701970683" watchObservedRunningTime="2025-12-11 09:56:24.846340304 +0000 UTC m=+157.706203617" Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.847241 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.847373 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7eec3d9f-436e-4d57-8bed-e1ed3bf328ec-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7eec3d9f-436e-4d57-8bed-e1ed3bf328ec\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.847428 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7eec3d9f-436e-4d57-8bed-e1ed3bf328ec-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7eec3d9f-436e-4d57-8bed-e1ed3bf328ec\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.848280 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7eec3d9f-436e-4d57-8bed-e1ed3bf328ec-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7eec3d9f-436e-4d57-8bed-e1ed3bf328ec\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 09:56:24 crc kubenswrapper[4746]: E1211 09:56:24.848371 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:25.348356748 +0000 UTC m=+158.208220061 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.877344 4746 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-czfmv container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.877406 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.912211 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7eec3d9f-436e-4d57-8bed-e1ed3bf328ec-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7eec3d9f-436e-4d57-8bed-e1ed3bf328ec\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.938290 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 09:56:24 crc kubenswrapper[4746]: I1211 09:56:24.955979 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:24 crc kubenswrapper[4746]: E1211 09:56:24.982990 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:25.482969652 +0000 UTC m=+158.342832975 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.026940 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv" event={"ID":"fcffdcae-c940-447e-8b52-e6ba9df066cd","Type":"ContainerStarted","Data":"8a38bf3dcab0a2c1cde152eff96204153ef4214d206b92c2d2d7dd70c55cffee"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.026983 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv" 
event={"ID":"fcffdcae-c940-447e-8b52-e6ba9df066cd","Type":"ContainerStarted","Data":"9e6bfcb792544b1fff7eb3f86216741c12c7a89de9187aec279afc11b6dd46d1"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.074272 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:25 crc kubenswrapper[4746]: E1211 09:56:25.074559 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:25.574543368 +0000 UTC m=+158.434406681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.082117 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-952k5" event={"ID":"196232fe-f052-4cfb-8ecc-102b9411a95f","Type":"ContainerStarted","Data":"98d9e2d19abd8cb1a0c142192ebdf3e17c18df091f9895f5b0ee5610001cfd0c"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.086976 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4n677" 
event={"ID":"2d6e68f4-a35b-43d1-b1fb-95600add4933","Type":"ContainerStarted","Data":"de013b745135eaaea2b7ece20fe353bdc01d6305907e4afb794ed63e4650d8a7"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.088029 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" event={"ID":"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6","Type":"ContainerStarted","Data":"bf6e87a740bd07f7a1e7dbaaed2fff8903b5b07bce374c480cdb74ffa3da3c8c"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.088169 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" event={"ID":"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6","Type":"ContainerStarted","Data":"f2b4b13db9a44d2b3c29481de915df800378fc11cdd45fbebab590643b5f130b"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.088651 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.099351 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" event={"ID":"847c9b9b-d231-4694-8229-0730ba158052","Type":"ContainerStarted","Data":"ecab029182d8eb86588f7bdbbabe1a9e3d84cdc3aafc2a017759a2392599ff90"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.100844 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs" event={"ID":"5503b821-54ed-4fd8-a336-ea952f033321","Type":"ContainerStarted","Data":"833fd1eaf4c458910c2a74f13a8bf288da3f8db7e07953601070cd5de2fd3d30"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.102019 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn" 
event={"ID":"aee37070-f695-4b53-aaac-c41b538f28f6","Type":"ContainerStarted","Data":"6ffe997758de2923aec18bab11fbeb0086b39a5aa939e944a8f24d2945082169"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.102791 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m" event={"ID":"eeec20bb-7eb5-48b5-81c8-ba8ded6347e4","Type":"ContainerStarted","Data":"f4b626b2d19817337610ff326fc23819b99ffa5434d5175a416365900e021ef0"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.102808 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m" event={"ID":"eeec20bb-7eb5-48b5-81c8-ba8ded6347e4","Type":"ContainerStarted","Data":"0065f6c5c49a64e16ab43484cdc3936c72e682b5f7cdce4383883cc4da969bde"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.103443 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m" Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.104029 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-t5lpk" event={"ID":"d1fbd58c-6a34-456a-a1d6-854c25fb0d9f","Type":"ContainerStarted","Data":"cfc9cd3ad7ecd5fda5d1dc6b23c57c9861a753308b46a731ab99df5d937b640f"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.104658 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" event={"ID":"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e","Type":"ContainerStarted","Data":"1923c28a18f89b8fa90dcbd554d5fc9660521905f18f91e172d1da1d63990ea9"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.105230 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" 
event={"ID":"9dd7dcf7-5174-44fb-b164-38de3c8788ad","Type":"ContainerStarted","Data":"9c1ec24c543482190f290402143b0175f9d7b33568531c02e0a789f1d44b8ced"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.144378 4746 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-zlj7m container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.144431 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m" podUID="eeec20bb-7eb5-48b5-81c8-ba8ded6347e4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.146566 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8b44g" event={"ID":"97b454ad-ff7c-4c7b-9d53-79b92b7520de","Type":"ContainerStarted","Data":"40721b430bf07e4b1259e8c564ec39d6cff4ad41a76057a77354963d23d0c3cb"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.157776 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" event={"ID":"e95e0ed3-6195-4925-abbb-116520ae5098","Type":"ContainerStarted","Data":"a92d3e3994efbb4a61bb42003ade34b5081f602df68c818a1f873f02dba524b7"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.158679 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" event={"ID":"49f71235-96fa-4452-8805-d08461253a1f","Type":"ContainerStarted","Data":"f55e5ac7e704b94299430f7dc46b8e67cd91e5fc1edd14fd94523de64865a4f0"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 
09:56:25.160247 4746 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-krdwz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.160285 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" podUID="09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.160777 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5" event={"ID":"2e68686f-8a09-4477-8d31-3e1e762d06d3","Type":"ContainerStarted","Data":"abf85d73a2f387bbdc708114331da8e5bcf74ae466378f856abb7a5e296176f5"} Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.164324 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" podStartSLOduration=136.164261214 podStartE2EDuration="2m16.164261214s" podCreationTimestamp="2025-12-11 09:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:24.86254719 +0000 UTC m=+157.722410513" watchObservedRunningTime="2025-12-11 09:56:25.164261214 +0000 UTC m=+158.024124537" Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.166339 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sxscv" podStartSLOduration=135.166326709 podStartE2EDuration="2m15.166326709s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:25.164065669 +0000 UTC m=+158.023928992" watchObservedRunningTime="2025-12-11 09:56:25.166326709 +0000 UTC m=+158.026190022" Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.169854 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.177086 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:25 crc kubenswrapper[4746]: E1211 09:56:25.177394 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:25.677379737 +0000 UTC m=+158.537243050 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.209422 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs" podStartSLOduration=135.209401969 podStartE2EDuration="2m15.209401969s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:25.20686787 +0000 UTC m=+158.066731193" watchObservedRunningTime="2025-12-11 09:56:25.209401969 +0000 UTC m=+158.069265282" Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.246918 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-bsdlm" Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.280552 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:25 crc kubenswrapper[4746]: E1211 09:56:25.281915 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 09:56:25.781901151 +0000 UTC m=+158.641764464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.298313 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" podStartSLOduration=135.298299223 podStartE2EDuration="2m15.298299223s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:25.297440819 +0000 UTC m=+158.157304132" watchObservedRunningTime="2025-12-11 09:56:25.298299223 +0000 UTC m=+158.158162536" Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.349809 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m" podStartSLOduration=135.349792098 podStartE2EDuration="2m15.349792098s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:25.347674382 +0000 UTC m=+158.207537695" watchObservedRunningTime="2025-12-11 09:56:25.349792098 +0000 UTC m=+158.209655411" Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.386357 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:25 crc kubenswrapper[4746]: E1211 09:56:25.386665 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:25.886653761 +0000 UTC m=+158.746517074 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.409331 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-4n677" podStartSLOduration=135.409315251 podStartE2EDuration="2m15.409315251s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:25.386797475 +0000 UTC m=+158.246660788" watchObservedRunningTime="2025-12-11 09:56:25.409315251 +0000 UTC m=+158.269178564" Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.487630 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:25 crc kubenswrapper[4746]: E1211 09:56:25.487934 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:25.987920258 +0000 UTC m=+158.847783571 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.540532 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 09:56:25 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Dec 11 09:56:25 crc kubenswrapper[4746]: [+]process-running ok Dec 11 09:56:25 crc kubenswrapper[4746]: healthz check failed Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.540585 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.589721 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:25 crc kubenswrapper[4746]: E1211 09:56:25.590038 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:26.090027117 +0000 UTC m=+158.949890420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.691178 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:25 crc kubenswrapper[4746]: E1211 09:56:25.691492 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:26.191459707 +0000 UTC m=+159.051323040 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.691873 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:25 crc kubenswrapper[4746]: E1211 09:56:25.692360 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:26.192344281 +0000 UTC m=+159.052207604 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.797170 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:25 crc kubenswrapper[4746]: E1211 09:56:25.797982 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:26.297961165 +0000 UTC m=+159.157824478 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:25 crc kubenswrapper[4746]: I1211 09:56:25.898985 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:25 crc kubenswrapper[4746]: E1211 09:56:25.899585 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:26.39957183 +0000 UTC m=+159.259435143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.000222 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:26 crc kubenswrapper[4746]: E1211 09:56:26.000893 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:26.500875727 +0000 UTC m=+159.360739060 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.230267 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:26 crc kubenswrapper[4746]: E1211 09:56:26.230592 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:26.730575963 +0000 UTC m=+159.590439276 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.314436 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" event={"ID":"247537bc-cda2-416f-8040-ecd313916cf2","Type":"ContainerStarted","Data":"fa288a3411b49570a208f6142556120990a8a150dbfe7be3eacb4bbd45ff251c"} Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.315510 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.331073 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.331259 4746 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-4hf8x container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.331321 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" podUID="247537bc-cda2-416f-8040-ecd313916cf2" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Dec 11 09:56:26 crc kubenswrapper[4746]: E1211 09:56:26.331441 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:26.831423598 +0000 UTC m=+159.691286911 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.339154 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" event={"ID":"847c9b9b-d231-4694-8229-0730ba158052","Type":"ContainerStarted","Data":"86d58b72592be72504d0cad0629ffaac72f45fb0983bc6d3297e3de9b37008b4"} Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.357898 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" podStartSLOduration=136.357880699 podStartE2EDuration="2m16.357880699s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:26.355952127 +0000 UTC m=+159.215815440" watchObservedRunningTime="2025-12-11 09:56:26.357880699 +0000 UTC m=+159.217744012" Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 
09:56:26.391000 4746 generic.go:334] "Generic (PLEG): container finished" podID="97b454ad-ff7c-4c7b-9d53-79b92b7520de" containerID="21f1964f61204e54a6619f140dad54fac2d11f870eba82416c9ac95fbfd68043" exitCode=0 Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.391194 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8b44g" event={"ID":"97b454ad-ff7c-4c7b-9d53-79b92b7520de","Type":"ContainerDied","Data":"21f1964f61204e54a6619f140dad54fac2d11f870eba82416c9ac95fbfd68043"} Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.454309 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:26 crc kubenswrapper[4746]: E1211 09:56:26.454894 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:26.954878031 +0000 UTC m=+159.814741344 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.460717 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-nh8tz" event={"ID":"75c281af-5b44-463c-9f6f-cb666090c7c6","Type":"ContainerStarted","Data":"9594995c4f6c439c7a174c2fd361447b3069c8ccef1822dada5127250017b4be"} Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.482293 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz" event={"ID":"9a59886a-3dad-4b64-b432-db95667e0bdc","Type":"ContainerStarted","Data":"ae62a3520c0a4065d2387e043d171a7c6d081d51849abed71bef48937edb297b"} Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.486827 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sf4lq" event={"ID":"26ce87dd-769c-424f-8865-6e2c8d81ee39","Type":"ContainerStarted","Data":"a0928ca6122d2c36836e061a16fee5171f56fd1e798f5dcac3bca8cc1dfc776a"} Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.487700 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" event={"ID":"e95e0ed3-6195-4925-abbb-116520ae5098","Type":"ContainerStarted","Data":"18fd0df46af41dc0074f7ce928573c765db8e9546edc19741727c731ce32eec3"} Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.488552 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-tx2q9" event={"ID":"5eef2155-a824-4d97-a67e-d2c19aaecbd6","Type":"ContainerStarted","Data":"65963404178ef48d6e8306887be95d35d0db5d59e495fbb06aa0aff34b63a796"} Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.489991 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-t5lpk" event={"ID":"d1fbd58c-6a34-456a-a1d6-854c25fb0d9f","Type":"ContainerStarted","Data":"ebf7e35c0ecb170d05d4565dd4ca5bda9277df7379be808a07741bba5de3abf6"} Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.490811 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" event={"ID":"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e","Type":"ContainerStarted","Data":"be661d318b8360bf8675bf87e0885727febb81c4d6a663a4d5a6e0a298e257bc"} Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.492311 4746 generic.go:334] "Generic (PLEG): container finished" podID="4b01d4aa-94e7-48da-86e7-6e5685e259b5" containerID="84bcca3703df7065e3c843126e89b82309c8e8dc624d67bd161bf533444eb6be" exitCode=0 Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.492350 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" event={"ID":"4b01d4aa-94e7-48da-86e7-6e5685e259b5","Type":"ContainerDied","Data":"84bcca3703df7065e3c843126e89b82309c8e8dc624d67bd161bf533444eb6be"} Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.494948 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-s2jbs" event={"ID":"5503b821-54ed-4fd8-a336-ea952f033321","Type":"ContainerStarted","Data":"1f295a86e4ffed4fddef112c322159ba3f0c845f72a67346c4d96ad556c06050"} Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.504792 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" event={"ID":"49f71235-96fa-4452-8805-d08461253a1f","Type":"ContainerStarted","Data":"7a422cbeb45c1aeebe76a8d2b9177d7db2447e2279c3e2a27f3ed610d07571a5"} Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.506715 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.510542 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 09:56:26 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Dec 11 09:56:26 crc kubenswrapper[4746]: [+]process-running ok Dec 11 09:56:26 crc kubenswrapper[4746]: healthz check failed Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.510599 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.510865 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5" event={"ID":"2e68686f-8a09-4477-8d31-3e1e762d06d3","Type":"ContainerStarted","Data":"66fef9df35400e713216ba054207ff021078535ce81f9b0b04ab7aeb4b166b4d"} Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.518792 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-952k5" event={"ID":"196232fe-f052-4cfb-8ecc-102b9411a95f","Type":"ContainerStarted","Data":"0779cd30e83024a079c1a91f1767224f2e0b90ec3b7d40ebcc0a21252a34fe8c"} Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.541430 4746 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8" event={"ID":"0178df62-7497-43fc-b639-3f44170cff1c","Type":"ContainerStarted","Data":"14a93abf3887857691732ed9e8d1f439520b8cc108b38773e439d422f2d1c91e"} Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.541759 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8" Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.543089 4746 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-krdwz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.543123 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" podUID="09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.543352 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.543389 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.643108 
4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:26 crc kubenswrapper[4746]: E1211 09:56:26.643217 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:27.143197041 +0000 UTC m=+160.003060354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.648575 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-nh8tz" podStartSLOduration=9.648545265 podStartE2EDuration="9.648545265s" podCreationTimestamp="2025-12-11 09:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:26.642242116 +0000 UTC m=+159.502105429" watchObservedRunningTime="2025-12-11 09:56:26.648545265 +0000 UTC m=+159.508408578" Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.651591 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.725177 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:56:26 crc kubenswrapper[4746]: E1211 09:56:26.726443 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:27.226409692 +0000 UTC m=+160.086273175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.738425 4746 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mdgqr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.738469 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" podUID="49f71235-96fa-4452-8805-d08461253a1f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: 
connection refused" Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.755016 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-ntcrz" podStartSLOduration=136.754991001 podStartE2EDuration="2m16.754991001s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:26.746079211 +0000 UTC m=+159.605942554" watchObservedRunningTime="2025-12-11 09:56:26.754991001 +0000 UTC m=+159.614854314" Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.772603 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:26 crc kubenswrapper[4746]: E1211 09:56:26.772777 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:27.272746189 +0000 UTC m=+160.132609502 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.773344 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:26 crc kubenswrapper[4746]: E1211 09:56:26.773867 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:27.273850369 +0000 UTC m=+160.133713682 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.799186 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-zlj7m" Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.813344 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8" podStartSLOduration=136.813327812 podStartE2EDuration="2m16.813327812s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:26.812473088 +0000 UTC m=+159.672336401" watchObservedRunningTime="2025-12-11 09:56:26.813327812 +0000 UTC m=+159.673191125" Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.843258 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.870463 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.874203 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:26 crc kubenswrapper[4746]: E1211 09:56:26.879796 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:27.37977123 +0000 UTC m=+160.239634543 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.887572 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:26 crc kubenswrapper[4746]: E1211 09:56:26.888267 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:27.388252359 +0000 UTC m=+160.248115672 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.914804 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" podStartSLOduration=136.914789493 podStartE2EDuration="2m16.914789493s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:26.91358455 +0000 UTC m=+159.773447873" watchObservedRunningTime="2025-12-11 09:56:26.914789493 +0000 UTC m=+159.774652806" Dec 11 09:56:26 crc kubenswrapper[4746]: I1211 09:56:26.916227 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" podStartSLOduration=136.916220622 podStartE2EDuration="2m16.916220622s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:26.867405307 +0000 UTC m=+159.727268630" watchObservedRunningTime="2025-12-11 09:56:26.916220622 +0000 UTC m=+159.776083935" Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:26.989244 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:27 crc kubenswrapper[4746]: E1211 09:56:26.990311 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:27.490291436 +0000 UTC m=+160.350154749 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.055119 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-8c2w5" podStartSLOduration=137.05510178 podStartE2EDuration="2m17.05510178s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:26.942203422 +0000 UTC m=+159.802066735" watchObservedRunningTime="2025-12-11 09:56:27.05510178 +0000 UTC m=+159.914965103" Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.091563 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 
09:56:27 crc kubenswrapper[4746]: E1211 09:56:27.092021 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:27.592005135 +0000 UTC m=+160.451868448 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.200777 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:27 crc kubenswrapper[4746]: E1211 09:56:27.201868 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:27.701841211 +0000 UTC m=+160.561704524 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.306586 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:27 crc kubenswrapper[4746]: E1211 09:56:27.307191 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:27.807169727 +0000 UTC m=+160.667033040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.409032 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:27 crc kubenswrapper[4746]: E1211 09:56:27.409674 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:27.909653027 +0000 UTC m=+160.769516340 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.507245 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 09:56:27 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Dec 11 09:56:27 crc kubenswrapper[4746]: [+]process-running ok Dec 11 09:56:27 crc kubenswrapper[4746]: healthz check failed Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.508408 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.521948 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:27 crc kubenswrapper[4746]: E1211 09:56:27.522605 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-11 09:56:28.022577517 +0000 UTC m=+160.882440830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.584552 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" event={"ID":"847c9b9b-d231-4694-8229-0730ba158052","Type":"ContainerStarted","Data":"5bbf7815a4793904b03d51d7a3adde8ef353596ea5657c55f88936e3f64b1fe1"} Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.621067 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8b44g" event={"ID":"97b454ad-ff7c-4c7b-9d53-79b92b7520de","Type":"ContainerStarted","Data":"76da3f6eef1e1c30076c8e32bb8536186f2fd3fa8dc4dde07b24811c9d3a674d"} Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.624175 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-q9jb9" podStartSLOduration=137.623277608 podStartE2EDuration="2m17.623277608s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:27.617772229 +0000 UTC m=+160.477635542" watchObservedRunningTime="2025-12-11 09:56:27.623277608 +0000 UTC m=+160.483140921" Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.624635 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:27 crc kubenswrapper[4746]: E1211 09:56:27.625264 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:28.125243491 +0000 UTC m=+160.985106804 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.724842 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" event={"ID":"c498d893-da54-4a88-9314-93fdfaaf130d","Type":"ContainerStarted","Data":"795d2c66b26b9973902acf83a167da88efa47d4150880fe06458c18ff125191c"} Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.734029 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:27 crc kubenswrapper[4746]: E1211 09:56:27.734357 4746 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:28.234345058 +0000 UTC m=+161.094208361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.758823 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-952k5" event={"ID":"196232fe-f052-4cfb-8ecc-102b9411a95f","Type":"ContainerStarted","Data":"ade0e40eebc34a66806b07d9a574152a8d3f778f0b018eaac9b4d57132241d40"} Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.758865 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-952k5" Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.778283 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7hm64" event={"ID":"a00f4c19-cef1-4b49-9e22-090e3cf5f2bd","Type":"ContainerStarted","Data":"db491d110068ad660ca9ea1729f58f65bd37b5cf799246ff6276be0872a528f0"} Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.795358 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" event={"ID":"e95e0ed3-6195-4925-abbb-116520ae5098","Type":"ContainerStarted","Data":"03788ecd194b2fff83d2da3869908c2322bf4d4a30263a55ed7db982ad233a02"} Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.797562 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-service-ca/service-ca-9c57cc56f-vjs55" event={"ID":"068ad235-6423-470d-9085-8e4536f18109","Type":"ContainerStarted","Data":"d4417fcc78d9fd7f90f83ba3c80a95a20ce0bf7bcbf7a8b6aca25fa60ff422e4"} Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.799063 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn" event={"ID":"aee37070-f695-4b53-aaac-c41b538f28f6","Type":"ContainerStarted","Data":"2bc6f0f90426a7981dc27141705b14495d0dfa41fa2498fa8c551c97826b5b31"} Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.806183 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" event={"ID":"ce723dd2-6ea2-49d1-9faf-c92026630754","Type":"ContainerStarted","Data":"666ce9fd7b9dd4cb5863d39532feb20a4ac288eba70cd1f8db4bde8a63a12cee"} Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.807068 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.834589 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:27 crc kubenswrapper[4746]: E1211 09:56:27.834768 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:28.334738541 +0000 UTC m=+161.194601854 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.834921 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:27 crc kubenswrapper[4746]: E1211 09:56:27.836753 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:28.336741705 +0000 UTC m=+161.196605018 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.843224 4746 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-26ppb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.843295 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" podUID="ce723dd2-6ea2-49d1-9faf-c92026630754" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.850385 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sf4lq" event={"ID":"26ce87dd-769c-424f-8865-6e2c8d81ee39","Type":"ContainerStarted","Data":"81f8a81c1537e0c10f3c9967b70e0642e4f85c73efcc26b4320a1bc4375902b6"} Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.863119 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-t5lpk" event={"ID":"d1fbd58c-6a34-456a-a1d6-854c25fb0d9f","Type":"ContainerStarted","Data":"35364223f45a915cf31ab2a857e8d7163f0dc8b1066a6518554af9755c135a32"} Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.891715 4746 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7eec3d9f-436e-4d57-8bed-e1ed3bf328ec","Type":"ContainerStarted","Data":"586c3d83c46ac14020ce65302909804f95331da1b7664dbad2aada5de5c60eda"} Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.909684 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" event={"ID":"4b01d4aa-94e7-48da-86e7-6e5685e259b5","Type":"ContainerStarted","Data":"030dc436a89258df61ce45d9afa5f8ef3556db1aeb736c667f552ce3a73fdb7a"} Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.914670 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" event={"ID":"9dd7dcf7-5174-44fb-b164-38de3c8788ad","Type":"ContainerStarted","Data":"599f23e9e308dedb03110cdd4d33e9b3638fcff2a7627cf9e8c0560274ecb554"} Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.916250 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.916311 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.916611 4746 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-4hf8x container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" start-of-body= Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.916638 4746 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" podUID="247537bc-cda2-416f-8040-ecd313916cf2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": dial tcp 10.217.0.33:5443: connect: connection refused" Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.917546 4746 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-krdwz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.917581 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" podUID="09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.919396 4746 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-mdgqr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" start-of-body= Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.919447 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" podUID="49f71235-96fa-4452-8805-d08461253a1f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.24:8443/healthz\": dial tcp 10.217.0.24:8443: connect: connection refused" Dec 11 09:56:27 crc kubenswrapper[4746]: I1211 09:56:27.936970 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:27 crc kubenswrapper[4746]: E1211 09:56:27.938536 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:28.438497664 +0000 UTC m=+161.298361017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.040619 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:28 crc kubenswrapper[4746]: E1211 09:56:28.044986 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:28.544973581 +0000 UTC m=+161.404836894 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.113700 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-m8ht8" podStartSLOduration=138.113685111 podStartE2EDuration="2m18.113685111s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:28.111261096 +0000 UTC m=+160.971124409" watchObservedRunningTime="2025-12-11 09:56:28.113685111 +0000 UTC m=+160.973548424" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.163384 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:28 crc kubenswrapper[4746]: E1211 09:56:28.163527 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:28.663507663 +0000 UTC m=+161.523370976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.163671 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:28 crc kubenswrapper[4746]: E1211 09:56:28.163926 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:28.663919383 +0000 UTC m=+161.523782696 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.206007 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-sf4lq" podStartSLOduration=138.205992356 podStartE2EDuration="2m18.205992356s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:28.203482699 +0000 UTC m=+161.063346022" watchObservedRunningTime="2025-12-11 09:56:28.205992356 +0000 UTC m=+161.065855669" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.206401 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-66mrd" podStartSLOduration=138.206397108 podStartE2EDuration="2m18.206397108s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:28.139225449 +0000 UTC m=+160.999088772" watchObservedRunningTime="2025-12-11 09:56:28.206397108 +0000 UTC m=+161.066260421" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.237331 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vjs55" podStartSLOduration=138.2373162 podStartE2EDuration="2m18.2373162s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:28.237238728 +0000 UTC m=+161.097102041" watchObservedRunningTime="2025-12-11 09:56:28.2373162 +0000 UTC m=+161.097179513" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.269419 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:28 crc kubenswrapper[4746]: E1211 09:56:28.269876 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:28.769856596 +0000 UTC m=+161.629719909 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.337678 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-t5lpk" podStartSLOduration=138.337660431 podStartE2EDuration="2m18.337660431s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:28.28188265 +0000 UTC m=+161.141745963" watchObservedRunningTime="2025-12-11 09:56:28.337660431 +0000 UTC m=+161.197523744" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.339929 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pldtc"] Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.340972 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pldtc" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.347615 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.361111 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pldtc"] Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.375071 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:28 crc kubenswrapper[4746]: E1211 09:56:28.375474 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:28.875457789 +0000 UTC m=+161.735321102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.378017 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" podStartSLOduration=138.378003107 podStartE2EDuration="2m18.378003107s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:28.374679258 +0000 UTC m=+161.234542571" watchObservedRunningTime="2025-12-11 09:56:28.378003107 +0000 UTC m=+161.237866420" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.452944 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" podStartSLOduration=138.452923045 podStartE2EDuration="2m18.452923045s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:28.452642607 +0000 UTC m=+161.312505920" watchObservedRunningTime="2025-12-11 09:56:28.452923045 +0000 UTC m=+161.312786368" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.477734 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:28 crc kubenswrapper[4746]: E1211 09:56:28.477835 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:28.977819775 +0000 UTC m=+161.837683088 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.478101 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bkdz\" (UniqueName: \"kubernetes.io/projected/9be0459f-f161-4203-868a-ba2d577c96d1-kube-api-access-2bkdz\") pod \"community-operators-pldtc\" (UID: \"9be0459f-f161-4203-868a-ba2d577c96d1\") " pod="openshift-marketplace/community-operators-pldtc" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.478181 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be0459f-f161-4203-868a-ba2d577c96d1-utilities\") pod \"community-operators-pldtc\" (UID: \"9be0459f-f161-4203-868a-ba2d577c96d1\") " pod="openshift-marketplace/community-operators-pldtc" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.478225 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be0459f-f161-4203-868a-ba2d577c96d1-catalog-content\") 
pod \"community-operators-pldtc\" (UID: \"9be0459f-f161-4203-868a-ba2d577c96d1\") " pod="openshift-marketplace/community-operators-pldtc" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.478251 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:28 crc kubenswrapper[4746]: E1211 09:56:28.478525 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:28.978517103 +0000 UTC m=+161.838380416 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.497889 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-tsvvn" podStartSLOduration=138.497867175 podStartE2EDuration="2m18.497867175s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:28.496085977 +0000 UTC m=+161.355949310" watchObservedRunningTime="2025-12-11 
09:56:28.497867175 +0000 UTC m=+161.357730498" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.507443 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 09:56:28 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Dec 11 09:56:28 crc kubenswrapper[4746]: [+]process-running ok Dec 11 09:56:28 crc kubenswrapper[4746]: healthz check failed Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.507536 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.520923 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vq5fs"] Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.521851 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vq5fs" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.528385 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.542201 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-952k5" podStartSLOduration=11.542182967 podStartE2EDuration="11.542182967s" podCreationTimestamp="2025-12-11 09:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:28.539402283 +0000 UTC m=+161.399265596" watchObservedRunningTime="2025-12-11 09:56:28.542182967 +0000 UTC m=+161.402046280" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.582617 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vq5fs"] Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.583255 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:28 crc kubenswrapper[4746]: E1211 09:56:28.584813 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:29.084779485 +0000 UTC m=+161.944642798 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.588859 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/492f0d06-a8d7-4f81-a86a-5f1caea059cd-catalog-content\") pod \"certified-operators-vq5fs\" (UID: \"492f0d06-a8d7-4f81-a86a-5f1caea059cd\") " pod="openshift-marketplace/certified-operators-vq5fs" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.589017 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be0459f-f161-4203-868a-ba2d577c96d1-catalog-content\") pod \"community-operators-pldtc\" (UID: \"9be0459f-f161-4203-868a-ba2d577c96d1\") " pod="openshift-marketplace/community-operators-pldtc" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.589241 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.589404 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2bkdz\" (UniqueName: \"kubernetes.io/projected/9be0459f-f161-4203-868a-ba2d577c96d1-kube-api-access-2bkdz\") pod 
\"community-operators-pldtc\" (UID: \"9be0459f-f161-4203-868a-ba2d577c96d1\") " pod="openshift-marketplace/community-operators-pldtc" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.589539 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ghz8\" (UniqueName: \"kubernetes.io/projected/492f0d06-a8d7-4f81-a86a-5f1caea059cd-kube-api-access-2ghz8\") pod \"certified-operators-vq5fs\" (UID: \"492f0d06-a8d7-4f81-a86a-5f1caea059cd\") " pod="openshift-marketplace/certified-operators-vq5fs" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.589663 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be0459f-f161-4203-868a-ba2d577c96d1-utilities\") pod \"community-operators-pldtc\" (UID: \"9be0459f-f161-4203-868a-ba2d577c96d1\") " pod="openshift-marketplace/community-operators-pldtc" Dec 11 09:56:28 crc kubenswrapper[4746]: E1211 09:56:28.592448 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:29.09242679 +0000 UTC m=+161.952290153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.595737 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/492f0d06-a8d7-4f81-a86a-5f1caea059cd-utilities\") pod \"certified-operators-vq5fs\" (UID: \"492f0d06-a8d7-4f81-a86a-5f1caea059cd\") " pod="openshift-marketplace/certified-operators-vq5fs" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.593080 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be0459f-f161-4203-868a-ba2d577c96d1-catalog-content\") pod \"community-operators-pldtc\" (UID: \"9be0459f-f161-4203-868a-ba2d577c96d1\") " pod="openshift-marketplace/community-operators-pldtc" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.611506 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be0459f-f161-4203-868a-ba2d577c96d1-utilities\") pod \"community-operators-pldtc\" (UID: \"9be0459f-f161-4203-868a-ba2d577c96d1\") " pod="openshift-marketplace/community-operators-pldtc" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.675252 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bkdz\" (UniqueName: \"kubernetes.io/projected/9be0459f-f161-4203-868a-ba2d577c96d1-kube-api-access-2bkdz\") pod \"community-operators-pldtc\" (UID: \"9be0459f-f161-4203-868a-ba2d577c96d1\") " 
pod="openshift-marketplace/community-operators-pldtc" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.698534 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:28 crc kubenswrapper[4746]: E1211 09:56:28.698720 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:29.198691151 +0000 UTC m=+162.058554464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.698856 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/492f0d06-a8d7-4f81-a86a-5f1caea059cd-utilities\") pod \"certified-operators-vq5fs\" (UID: \"492f0d06-a8d7-4f81-a86a-5f1caea059cd\") " pod="openshift-marketplace/certified-operators-vq5fs" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.698918 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/492f0d06-a8d7-4f81-a86a-5f1caea059cd-catalog-content\") pod \"certified-operators-vq5fs\" (UID: 
\"492f0d06-a8d7-4f81-a86a-5f1caea059cd\") " pod="openshift-marketplace/certified-operators-vq5fs" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.698973 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.699035 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ghz8\" (UniqueName: \"kubernetes.io/projected/492f0d06-a8d7-4f81-a86a-5f1caea059cd-kube-api-access-2ghz8\") pod \"certified-operators-vq5fs\" (UID: \"492f0d06-a8d7-4f81-a86a-5f1caea059cd\") " pod="openshift-marketplace/certified-operators-vq5fs" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.699525 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/492f0d06-a8d7-4f81-a86a-5f1caea059cd-catalog-content\") pod \"certified-operators-vq5fs\" (UID: \"492f0d06-a8d7-4f81-a86a-5f1caea059cd\") " pod="openshift-marketplace/certified-operators-vq5fs" Dec 11 09:56:28 crc kubenswrapper[4746]: E1211 09:56:28.699655 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:29.199641367 +0000 UTC m=+162.059504750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.699744 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/492f0d06-a8d7-4f81-a86a-5f1caea059cd-utilities\") pod \"certified-operators-vq5fs\" (UID: \"492f0d06-a8d7-4f81-a86a-5f1caea059cd\") " pod="openshift-marketplace/certified-operators-vq5fs" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.749974 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ghz8\" (UniqueName: \"kubernetes.io/projected/492f0d06-a8d7-4f81-a86a-5f1caea059cd-kube-api-access-2ghz8\") pod \"certified-operators-vq5fs\" (UID: \"492f0d06-a8d7-4f81-a86a-5f1caea059cd\") " pod="openshift-marketplace/certified-operators-vq5fs" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.776752 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rkx2w"] Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.778618 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rkx2w" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.799915 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:28 crc kubenswrapper[4746]: E1211 09:56:28.800343 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:29.300304067 +0000 UTC m=+162.160167380 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.858455 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vq5fs" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.894990 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rkx2w"] Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.904324 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5155500-7423-4b61-8724-8172a012cd8a-utilities\") pod \"community-operators-rkx2w\" (UID: \"d5155500-7423-4b61-8724-8172a012cd8a\") " pod="openshift-marketplace/community-operators-rkx2w" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.904640 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5phx\" (UniqueName: \"kubernetes.io/projected/d5155500-7423-4b61-8724-8172a012cd8a-kube-api-access-c5phx\") pod \"community-operators-rkx2w\" (UID: \"d5155500-7423-4b61-8724-8172a012cd8a\") " pod="openshift-marketplace/community-operators-rkx2w" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.904785 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5155500-7423-4b61-8724-8172a012cd8a-catalog-content\") pod \"community-operators-rkx2w\" (UID: \"d5155500-7423-4b61-8724-8172a012cd8a\") " pod="openshift-marketplace/community-operators-rkx2w" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.904970 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:28 crc kubenswrapper[4746]: E1211 
09:56:28.905486 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:29.405469849 +0000 UTC m=+162.265333232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.941780 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-s7kv7"] Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.943170 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s7kv7" Dec 11 09:56:28 crc kubenswrapper[4746]: I1211 09:56:28.961677 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pldtc" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.032834 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.033149 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5155500-7423-4b61-8724-8172a012cd8a-utilities\") pod \"community-operators-rkx2w\" (UID: \"d5155500-7423-4b61-8724-8172a012cd8a\") " pod="openshift-marketplace/community-operators-rkx2w" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.033195 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5phx\" (UniqueName: \"kubernetes.io/projected/d5155500-7423-4b61-8724-8172a012cd8a-kube-api-access-c5phx\") pod \"community-operators-rkx2w\" (UID: \"d5155500-7423-4b61-8724-8172a012cd8a\") " pod="openshift-marketplace/community-operators-rkx2w" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.033230 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5155500-7423-4b61-8724-8172a012cd8a-catalog-content\") pod \"community-operators-rkx2w\" (UID: \"d5155500-7423-4b61-8724-8172a012cd8a\") " pod="openshift-marketplace/community-operators-rkx2w" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.033842 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5155500-7423-4b61-8724-8172a012cd8a-catalog-content\") pod \"community-operators-rkx2w\" (UID: \"d5155500-7423-4b61-8724-8172a012cd8a\") 
" pod="openshift-marketplace/community-operators-rkx2w" Dec 11 09:56:29 crc kubenswrapper[4746]: E1211 09:56:29.034108 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:29.534087442 +0000 UTC m=+162.393950755 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.034461 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5155500-7423-4b61-8724-8172a012cd8a-utilities\") pod \"community-operators-rkx2w\" (UID: \"d5155500-7423-4b61-8724-8172a012cd8a\") " pod="openshift-marketplace/community-operators-rkx2w" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.109537 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7eec3d9f-436e-4d57-8bed-e1ed3bf328ec","Type":"ContainerStarted","Data":"9eaad2b70094d8ab88d83d6718b9ab23acca69aa451b83344125eeead72343bc"} Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.113174 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-7hm64" event={"ID":"a00f4c19-cef1-4b49-9e22-090e3cf5f2bd","Type":"ContainerStarted","Data":"4e37cac8902c680b9b6653f0eb557594eff25a9d3f94421d5cadd8089d94f684"} Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.123340 4746 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-8b44g" event={"ID":"97b454ad-ff7c-4c7b-9d53-79b92b7520de","Type":"ContainerStarted","Data":"57031d6cb3cf7f12f394ba3929e6e1fd948bd0fdda2e3081b3cfea4d9bc97761"} Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.124593 4746 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-26ppb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.124703 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" podUID="ce723dd2-6ea2-49d1-9faf-c92026630754" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.124711 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5phx\" (UniqueName: \"kubernetes.io/projected/d5155500-7423-4b61-8724-8172a012cd8a-kube-api-access-c5phx\") pod \"community-operators-rkx2w\" (UID: \"d5155500-7423-4b61-8724-8172a012cd8a\") " pod="openshift-marketplace/community-operators-rkx2w" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.143391 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.143450 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-catalog-content\") pod \"certified-operators-s7kv7\" (UID: \"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7\") " pod="openshift-marketplace/certified-operators-s7kv7" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.143477 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-utilities\") pod \"certified-operators-s7kv7\" (UID: \"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7\") " pod="openshift-marketplace/certified-operators-s7kv7" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.143502 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f67fc\" (UniqueName: \"kubernetes.io/projected/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-kube-api-access-f67fc\") pod \"certified-operators-s7kv7\" (UID: \"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7\") " pod="openshift-marketplace/certified-operators-s7kv7" Dec 11 09:56:29 crc kubenswrapper[4746]: E1211 09:56:29.144060 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:29.644028571 +0000 UTC m=+162.503891884 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.172104 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s7kv7"] Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.174271 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=5.174234005 podStartE2EDuration="5.174234005s" podCreationTimestamp="2025-12-11 09:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:29.137864965 +0000 UTC m=+161.997728298" watchObservedRunningTime="2025-12-11 09:56:29.174234005 +0000 UTC m=+162.034097318" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.244787 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-7hm64" podStartSLOduration=139.244769733 podStartE2EDuration="2m19.244769733s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:29.24462807 +0000 UTC m=+162.104491403" watchObservedRunningTime="2025-12-11 09:56:29.244769733 +0000 UTC m=+162.104633046" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.245236 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.245839 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-8b44g" podStartSLOduration=140.245834392 podStartE2EDuration="2m20.245834392s" podCreationTimestamp="2025-12-11 09:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:29.212851004 +0000 UTC m=+162.072714307" watchObservedRunningTime="2025-12-11 09:56:29.245834392 +0000 UTC m=+162.105697705" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.245874 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-catalog-content\") pod \"certified-operators-s7kv7\" (UID: \"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7\") " pod="openshift-marketplace/certified-operators-s7kv7" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.245935 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-utilities\") pod \"certified-operators-s7kv7\" (UID: \"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7\") " pod="openshift-marketplace/certified-operators-s7kv7" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.246035 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f67fc\" (UniqueName: \"kubernetes.io/projected/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-kube-api-access-f67fc\") pod \"certified-operators-s7kv7\" (UID: \"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7\") " pod="openshift-marketplace/certified-operators-s7kv7" Dec 11 09:56:29 crc 
kubenswrapper[4746]: E1211 09:56:29.247448 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:29.747433965 +0000 UTC m=+162.607297288 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.251489 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-utilities\") pod \"certified-operators-s7kv7\" (UID: \"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7\") " pod="openshift-marketplace/certified-operators-s7kv7" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.251572 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-catalog-content\") pod \"certified-operators-s7kv7\" (UID: \"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7\") " pod="openshift-marketplace/certified-operators-s7kv7" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.305094 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f67fc\" (UniqueName: \"kubernetes.io/projected/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-kube-api-access-f67fc\") pod \"certified-operators-s7kv7\" (UID: \"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7\") " pod="openshift-marketplace/certified-operators-s7kv7" Dec 11 
09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.347956 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:29 crc kubenswrapper[4746]: E1211 09:56:29.348260 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:29.848248739 +0000 UTC m=+162.708112042 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.404571 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rkx2w" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.461567 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:29 crc kubenswrapper[4746]: E1211 09:56:29.461872 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:29.961858748 +0000 UTC m=+162.821722061 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.464132 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-mdgqr" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.564835 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:29 
crc kubenswrapper[4746]: E1211 09:56:29.565251 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:30.065233941 +0000 UTC m=+162.925097254 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.569253 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 09:56:29 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Dec 11 09:56:29 crc kubenswrapper[4746]: [+]process-running ok Dec 11 09:56:29 crc kubenswrapper[4746]: healthz check failed Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.569299 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.595438 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s7kv7" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.668650 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:29 crc kubenswrapper[4746]: E1211 09:56:29.669252 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:30.169232981 +0000 UTC m=+163.029096304 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.773023 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:29 crc kubenswrapper[4746]: E1211 09:56:29.773767 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:30.273752425 +0000 UTC m=+163.133615738 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.873948 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:29 crc kubenswrapper[4746]: E1211 09:56:29.874129 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:30.374099587 +0000 UTC m=+163.233962900 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.874441 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:29 crc kubenswrapper[4746]: E1211 09:56:29.874745 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:30.374733384 +0000 UTC m=+163.234596697 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.878109 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.878156 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.910474 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vq5fs"] Dec 11 09:56:29 crc kubenswrapper[4746]: I1211 09:56:29.976307 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:29 crc kubenswrapper[4746]: E1211 09:56:29.976683 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:30.476651558 +0000 UTC m=+163.336514871 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.011967 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.012736 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.088374 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:30 crc kubenswrapper[4746]: E1211 09:56:30.088673 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:30.588663283 +0000 UTC m=+163.448526596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.094728 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.094781 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.094861 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.094906 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.109346 4746 patch_prober.go:28] interesting pod/console-f9d7485db-4n677 container/console 
namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.109411 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4n677" podUID="2d6e68f4-a35b-43d1-b1fb-95600add4933" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.112761 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pldtc"] Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.190414 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:30 crc kubenswrapper[4746]: E1211 09:56:30.193512 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:30.693487836 +0000 UTC m=+163.553351139 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.217985 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" event={"ID":"9dd7dcf7-5174-44fb-b164-38de3c8788ad","Type":"ContainerStarted","Data":"208b25b6f458358cec7d21d04dbda22190c93f832df91195c5cfd0624f2b393e"} Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.244115 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq5fs" event={"ID":"492f0d06-a8d7-4f81-a86a-5f1caea059cd","Type":"ContainerStarted","Data":"f8dec152ec4c959816278e452780b8cfb5efc5fbbc0e308e763bd684709d6cf5"} Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.252116 4746 generic.go:334] "Generic (PLEG): container finished" podID="7eec3d9f-436e-4d57-8bed-e1ed3bf328ec" containerID="9eaad2b70094d8ab88d83d6718b9ab23acca69aa451b83344125eeead72343bc" exitCode=0 Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.253492 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7eec3d9f-436e-4d57-8bed-e1ed3bf328ec","Type":"ContainerDied","Data":"9eaad2b70094d8ab88d83d6718b9ab23acca69aa451b83344125eeead72343bc"} Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.309464 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:30 crc kubenswrapper[4746]: E1211 09:56:30.310742 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:30.810730833 +0000 UTC m=+163.670594146 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.319770 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.410080 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.410114 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.410394 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:30 crc kubenswrapper[4746]: E1211 
09:56:30.410695 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:30.910674223 +0000 UTC m=+163.770537536 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.420908 4746 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8b44g container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.13:8443/livez\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body= Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.420957 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-8b44g" podUID="97b454ad-ff7c-4c7b-9d53-79b92b7520de" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.13:8443/livez\": dial tcp 10.217.0.13:8443: connect: connection refused" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.434324 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.437767 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.444214 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.444670 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.445530 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.485114 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rkx2w"] Dec 11 09:56:30 crc kubenswrapper[4746]: W1211 09:56:30.490660 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5155500_7423_4b61_8724_8172a012cd8a.slice/crio-af5c9bc7936d4d954939f787e6a3b4f9573e45e13006c0c7ae995eaca3d318f2 WatchSource:0}: Error finding container af5c9bc7936d4d954939f787e6a3b4f9573e45e13006c0c7ae995eaca3d318f2: Status 404 returned error can't find the container with id af5c9bc7936d4d954939f787e6a3b4f9573e45e13006c0c7ae995eaca3d318f2 Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.496666 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pvzvj"] Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.509462 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 09:56:30 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Dec 11 09:56:30 crc kubenswrapper[4746]: [+]process-running ok Dec 11 
09:56:30 crc kubenswrapper[4746]: healthz check failed Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.509537 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.519935 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d90e3835-f259-4c44-b968-a7d90ef7dda5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d90e3835-f259-4c44-b968-a7d90ef7dda5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.520017 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d90e3835-f259-4c44-b968-a7d90ef7dda5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d90e3835-f259-4c44-b968-a7d90ef7dda5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.520076 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:30 crc kubenswrapper[4746]: E1211 09:56:30.520541 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-11 09:56:31.02052322 +0000 UTC m=+163.880386533 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.544175 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.544974 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvzvj" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.547695 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.553184 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvzvj"] Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.603891 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.627699 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.627983 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/611ec549-a1d2-4b9a-b67e-2216ec21327a-catalog-content\") pod \"redhat-marketplace-pvzvj\" (UID: \"611ec549-a1d2-4b9a-b67e-2216ec21327a\") " pod="openshift-marketplace/redhat-marketplace-pvzvj" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.628039 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/611ec549-a1d2-4b9a-b67e-2216ec21327a-utilities\") pod \"redhat-marketplace-pvzvj\" (UID: \"611ec549-a1d2-4b9a-b67e-2216ec21327a\") " pod="openshift-marketplace/redhat-marketplace-pvzvj" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.628091 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw8xr\" (UniqueName: \"kubernetes.io/projected/611ec549-a1d2-4b9a-b67e-2216ec21327a-kube-api-access-mw8xr\") pod \"redhat-marketplace-pvzvj\" (UID: \"611ec549-a1d2-4b9a-b67e-2216ec21327a\") " pod="openshift-marketplace/redhat-marketplace-pvzvj" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.628124 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d90e3835-f259-4c44-b968-a7d90ef7dda5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d90e3835-f259-4c44-b968-a7d90ef7dda5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 09:56:30 crc kubenswrapper[4746]: E1211 09:56:30.628163 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:31.128124037 +0000 UTC m=+163.987987350 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.628415 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d90e3835-f259-4c44-b968-a7d90ef7dda5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d90e3835-f259-4c44-b968-a7d90ef7dda5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.628509 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:30 crc kubenswrapper[4746]: E1211 09:56:30.629039 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:31.129030382 +0000 UTC m=+163.988893685 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.630231 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d90e3835-f259-4c44-b968-a7d90ef7dda5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"d90e3835-f259-4c44-b968-a7d90ef7dda5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.727627 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d90e3835-f259-4c44-b968-a7d90ef7dda5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"d90e3835-f259-4c44-b968-a7d90ef7dda5\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.729362 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.729484 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw8xr\" (UniqueName: \"kubernetes.io/projected/611ec549-a1d2-4b9a-b67e-2216ec21327a-kube-api-access-mw8xr\") pod \"redhat-marketplace-pvzvj\" (UID: \"611ec549-a1d2-4b9a-b67e-2216ec21327a\") " 
pod="openshift-marketplace/redhat-marketplace-pvzvj" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.729566 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/611ec549-a1d2-4b9a-b67e-2216ec21327a-catalog-content\") pod \"redhat-marketplace-pvzvj\" (UID: \"611ec549-a1d2-4b9a-b67e-2216ec21327a\") " pod="openshift-marketplace/redhat-marketplace-pvzvj" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.729608 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/611ec549-a1d2-4b9a-b67e-2216ec21327a-utilities\") pod \"redhat-marketplace-pvzvj\" (UID: \"611ec549-a1d2-4b9a-b67e-2216ec21327a\") " pod="openshift-marketplace/redhat-marketplace-pvzvj" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.729935 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/611ec549-a1d2-4b9a-b67e-2216ec21327a-utilities\") pod \"redhat-marketplace-pvzvj\" (UID: \"611ec549-a1d2-4b9a-b67e-2216ec21327a\") " pod="openshift-marketplace/redhat-marketplace-pvzvj" Dec 11 09:56:30 crc kubenswrapper[4746]: E1211 09:56:30.730008 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:31.22999657 +0000 UTC m=+164.089859883 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.730539 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/611ec549-a1d2-4b9a-b67e-2216ec21327a-catalog-content\") pod \"redhat-marketplace-pvzvj\" (UID: \"611ec549-a1d2-4b9a-b67e-2216ec21327a\") " pod="openshift-marketplace/redhat-marketplace-pvzvj" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.833395 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.834692 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:30 crc kubenswrapper[4746]: E1211 09:56:30.834989 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:31.334978517 +0000 UTC m=+164.194841830 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.836876 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.836922 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.870144 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw8xr\" (UniqueName: \"kubernetes.io/projected/611ec549-a1d2-4b9a-b67e-2216ec21327a-kube-api-access-mw8xr\") pod \"redhat-marketplace-pvzvj\" (UID: \"611ec549-a1d2-4b9a-b67e-2216ec21327a\") " pod="openshift-marketplace/redhat-marketplace-pvzvj" Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.945677 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:30 crc kubenswrapper[4746]: E1211 09:56:30.946940 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 09:56:31.446923971 +0000 UTC m=+164.306787284 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.950202 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5p9dz"] Dec 11 09:56:30 crc kubenswrapper[4746]: I1211 09:56:30.965548 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5p9dz" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.010292 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvzvj" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.015990 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-s7kv7"] Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.042375 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5p9dz"] Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.051293 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:31 crc kubenswrapper[4746]: E1211 09:56:31.051591 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:31.551580449 +0000 UTC m=+164.411443762 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.154254 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:31 crc kubenswrapper[4746]: E1211 09:56:31.154439 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:31.654414037 +0000 UTC m=+164.514277350 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.154515 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea31c3c-c4a5-47be-bddf-49b679f030d6-catalog-content\") pod \"redhat-marketplace-5p9dz\" (UID: \"7ea31c3c-c4a5-47be-bddf-49b679f030d6\") " pod="openshift-marketplace/redhat-marketplace-5p9dz" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.154664 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.154775 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xghhn\" (UniqueName: \"kubernetes.io/projected/7ea31c3c-c4a5-47be-bddf-49b679f030d6-kube-api-access-xghhn\") pod \"redhat-marketplace-5p9dz\" (UID: \"7ea31c3c-c4a5-47be-bddf-49b679f030d6\") " pod="openshift-marketplace/redhat-marketplace-5p9dz" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.154818 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7ea31c3c-c4a5-47be-bddf-49b679f030d6-utilities\") pod \"redhat-marketplace-5p9dz\" (UID: \"7ea31c3c-c4a5-47be-bddf-49b679f030d6\") " pod="openshift-marketplace/redhat-marketplace-5p9dz" Dec 11 09:56:31 crc kubenswrapper[4746]: E1211 09:56:31.154947 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:31.654940022 +0000 UTC m=+164.514803325 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.255742 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:31 crc kubenswrapper[4746]: E1211 09:56:31.255904 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:31.755871229 +0000 UTC m=+164.615734552 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.255963 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xghhn\" (UniqueName: \"kubernetes.io/projected/7ea31c3c-c4a5-47be-bddf-49b679f030d6-kube-api-access-xghhn\") pod \"redhat-marketplace-5p9dz\" (UID: \"7ea31c3c-c4a5-47be-bddf-49b679f030d6\") " pod="openshift-marketplace/redhat-marketplace-5p9dz" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.256145 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea31c3c-c4a5-47be-bddf-49b679f030d6-utilities\") pod \"redhat-marketplace-5p9dz\" (UID: \"7ea31c3c-c4a5-47be-bddf-49b679f030d6\") " pod="openshift-marketplace/redhat-marketplace-5p9dz" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.256220 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea31c3c-c4a5-47be-bddf-49b679f030d6-catalog-content\") pod \"redhat-marketplace-5p9dz\" (UID: \"7ea31c3c-c4a5-47be-bddf-49b679f030d6\") " pod="openshift-marketplace/redhat-marketplace-5p9dz" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.257327 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea31c3c-c4a5-47be-bddf-49b679f030d6-utilities\") pod \"redhat-marketplace-5p9dz\" (UID: \"7ea31c3c-c4a5-47be-bddf-49b679f030d6\") " 
pod="openshift-marketplace/redhat-marketplace-5p9dz" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.257388 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea31c3c-c4a5-47be-bddf-49b679f030d6-catalog-content\") pod \"redhat-marketplace-5p9dz\" (UID: \"7ea31c3c-c4a5-47be-bddf-49b679f030d6\") " pod="openshift-marketplace/redhat-marketplace-5p9dz" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.269674 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq5fs" event={"ID":"492f0d06-a8d7-4f81-a86a-5f1caea059cd","Type":"ContainerStarted","Data":"8fe538cc9f6f9bda884a3906a8dcfc213fd9a768d915c3b587d251c1d1095578"} Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.270607 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7kv7" event={"ID":"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7","Type":"ContainerStarted","Data":"c08b0720ec61b611df7c4e1c0b37df23b6a9bd3902c3260cfeb4a9bab5bec398"} Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.271427 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkx2w" event={"ID":"d5155500-7423-4b61-8724-8172a012cd8a","Type":"ContainerStarted","Data":"af5c9bc7936d4d954939f787e6a3b4f9573e45e13006c0c7ae995eaca3d318f2"} Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.272798 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pldtc" event={"ID":"9be0459f-f161-4203-868a-ba2d577c96d1","Type":"ContainerStarted","Data":"2014e9a604626b16cf8f0a0ed2ca8cd2ff5a44629abec5b44dda4927dca4b0d2"} Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.280039 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xghhn\" (UniqueName: \"kubernetes.io/projected/7ea31c3c-c4a5-47be-bddf-49b679f030d6-kube-api-access-xghhn\") 
pod \"redhat-marketplace-5p9dz\" (UID: \"7ea31c3c-c4a5-47be-bddf-49b679f030d6\") " pod="openshift-marketplace/redhat-marketplace-5p9dz" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.288530 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 11 09:56:31 crc kubenswrapper[4746]: W1211 09:56:31.295737 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podd90e3835_f259_4c44_b968_a7d90ef7dda5.slice/crio-a2c96ea60c261242636fce03f25087e1e5ac0cc279e8a943e5e778031341f98e WatchSource:0}: Error finding container a2c96ea60c261242636fce03f25087e1e5ac0cc279e8a943e5e778031341f98e: Status 404 returned error can't find the container with id a2c96ea60c261242636fce03f25087e1e5ac0cc279e8a943e5e778031341f98e Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.299894 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5p9dz" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.356804 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:31 crc kubenswrapper[4746]: E1211 09:56:31.357563 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:31.857547406 +0000 UTC m=+164.717410719 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.409880 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvzvj"] Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.457749 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:31 crc kubenswrapper[4746]: E1211 09:56:31.457974 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:31.957956099 +0000 UTC m=+164.817819412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.483101 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-whqdv"] Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.484576 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whqdv" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.494874 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.503911 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 09:56:31 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Dec 11 09:56:31 crc kubenswrapper[4746]: [+]process-running ok Dec 11 09:56:31 crc kubenswrapper[4746]: healthz check failed Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.503961 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.508254 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-whqdv"] Dec 
11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.558772 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5669b14f-d850-4cfc-a7ae-18f880dbccb5-utilities\") pod \"redhat-operators-whqdv\" (UID: \"5669b14f-d850-4cfc-a7ae-18f880dbccb5\") " pod="openshift-marketplace/redhat-operators-whqdv" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.558998 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8wm4\" (UniqueName: \"kubernetes.io/projected/5669b14f-d850-4cfc-a7ae-18f880dbccb5-kube-api-access-m8wm4\") pod \"redhat-operators-whqdv\" (UID: \"5669b14f-d850-4cfc-a7ae-18f880dbccb5\") " pod="openshift-marketplace/redhat-operators-whqdv" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.559255 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5669b14f-d850-4cfc-a7ae-18f880dbccb5-catalog-content\") pod \"redhat-operators-whqdv\" (UID: \"5669b14f-d850-4cfc-a7ae-18f880dbccb5\") " pod="openshift-marketplace/redhat-operators-whqdv" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.559385 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:31 crc kubenswrapper[4746]: E1211 09:56:31.559732 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-11 09:56:32.059720419 +0000 UTC m=+164.919583732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.573595 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5p9dz"] Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.660718 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:31 crc kubenswrapper[4746]: E1211 09:56:31.660902 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:32.160880383 +0000 UTC m=+165.020743686 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.661004 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5669b14f-d850-4cfc-a7ae-18f880dbccb5-utilities\") pod \"redhat-operators-whqdv\" (UID: \"5669b14f-d850-4cfc-a7ae-18f880dbccb5\") " pod="openshift-marketplace/redhat-operators-whqdv" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.661151 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8wm4\" (UniqueName: \"kubernetes.io/projected/5669b14f-d850-4cfc-a7ae-18f880dbccb5-kube-api-access-m8wm4\") pod \"redhat-operators-whqdv\" (UID: \"5669b14f-d850-4cfc-a7ae-18f880dbccb5\") " pod="openshift-marketplace/redhat-operators-whqdv" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.661189 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5669b14f-d850-4cfc-a7ae-18f880dbccb5-catalog-content\") pod \"redhat-operators-whqdv\" (UID: \"5669b14f-d850-4cfc-a7ae-18f880dbccb5\") " pod="openshift-marketplace/redhat-operators-whqdv" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.661250 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: 
\"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:31 crc kubenswrapper[4746]: E1211 09:56:31.661504 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:32.161496159 +0000 UTC m=+165.021359472 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.661819 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5669b14f-d850-4cfc-a7ae-18f880dbccb5-utilities\") pod \"redhat-operators-whqdv\" (UID: \"5669b14f-d850-4cfc-a7ae-18f880dbccb5\") " pod="openshift-marketplace/redhat-operators-whqdv" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.661821 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5669b14f-d850-4cfc-a7ae-18f880dbccb5-catalog-content\") pod \"redhat-operators-whqdv\" (UID: \"5669b14f-d850-4cfc-a7ae-18f880dbccb5\") " pod="openshift-marketplace/redhat-operators-whqdv" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.761859 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:31 crc kubenswrapper[4746]: E1211 09:56:31.762025 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:32.261987735 +0000 UTC m=+165.121851058 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.762351 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:31 crc kubenswrapper[4746]: E1211 09:56:31.762647 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:32.262635983 +0000 UTC m=+165.122499296 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.863468 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:31 crc kubenswrapper[4746]: E1211 09:56:31.865595 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:32.363890378 +0000 UTC m=+165.223753721 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.865888 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:31 crc kubenswrapper[4746]: E1211 09:56:31.866415 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:32.366396506 +0000 UTC m=+165.226259859 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.886827 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bgg6h"] Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.887911 4746 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-26ppb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.888035 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" podUID="ce723dd2-6ea2-49d1-9faf-c92026630754" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.891878 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bgg6h" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.903143 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bgg6h"] Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.967378 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.967531 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh759\" (UniqueName: \"kubernetes.io/projected/cccdab48-aeee-44f8-aebc-90129170ea8a-kube-api-access-lh759\") pod \"redhat-operators-bgg6h\" (UID: \"cccdab48-aeee-44f8-aebc-90129170ea8a\") " pod="openshift-marketplace/redhat-operators-bgg6h" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.967627 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cccdab48-aeee-44f8-aebc-90129170ea8a-utilities\") pod \"redhat-operators-bgg6h\" (UID: \"cccdab48-aeee-44f8-aebc-90129170ea8a\") " pod="openshift-marketplace/redhat-operators-bgg6h" Dec 11 09:56:31 crc kubenswrapper[4746]: I1211 09:56:31.967691 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cccdab48-aeee-44f8-aebc-90129170ea8a-catalog-content\") pod \"redhat-operators-bgg6h\" (UID: \"cccdab48-aeee-44f8-aebc-90129170ea8a\") " pod="openshift-marketplace/redhat-operators-bgg6h" Dec 11 09:56:31 crc kubenswrapper[4746]: E1211 09:56:31.967799 4746 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:32.467779685 +0000 UTC m=+165.327642998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.069087 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cccdab48-aeee-44f8-aebc-90129170ea8a-utilities\") pod \"redhat-operators-bgg6h\" (UID: \"cccdab48-aeee-44f8-aebc-90129170ea8a\") " pod="openshift-marketplace/redhat-operators-bgg6h" Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.069145 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.069196 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cccdab48-aeee-44f8-aebc-90129170ea8a-catalog-content\") pod \"redhat-operators-bgg6h\" (UID: \"cccdab48-aeee-44f8-aebc-90129170ea8a\") " pod="openshift-marketplace/redhat-operators-bgg6h" Dec 11 09:56:32 crc kubenswrapper[4746]: 
I1211 09:56:32.069231 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh759\" (UniqueName: \"kubernetes.io/projected/cccdab48-aeee-44f8-aebc-90129170ea8a-kube-api-access-lh759\") pod \"redhat-operators-bgg6h\" (UID: \"cccdab48-aeee-44f8-aebc-90129170ea8a\") " pod="openshift-marketplace/redhat-operators-bgg6h" Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.069985 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cccdab48-aeee-44f8-aebc-90129170ea8a-utilities\") pod \"redhat-operators-bgg6h\" (UID: \"cccdab48-aeee-44f8-aebc-90129170ea8a\") " pod="openshift-marketplace/redhat-operators-bgg6h" Dec 11 09:56:32 crc kubenswrapper[4746]: E1211 09:56:32.070253 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:32.570240904 +0000 UTC m=+165.430104217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.070586 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cccdab48-aeee-44f8-aebc-90129170ea8a-catalog-content\") pod \"redhat-operators-bgg6h\" (UID: \"cccdab48-aeee-44f8-aebc-90129170ea8a\") " pod="openshift-marketplace/redhat-operators-bgg6h" Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.090410 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh759\" (UniqueName: \"kubernetes.io/projected/cccdab48-aeee-44f8-aebc-90129170ea8a-kube-api-access-lh759\") pod \"redhat-operators-bgg6h\" (UID: \"cccdab48-aeee-44f8-aebc-90129170ea8a\") " pod="openshift-marketplace/redhat-operators-bgg6h" Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.119873 4746 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-4hf8x container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.119931 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" podUID="247537bc-cda2-416f-8040-ecd313916cf2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers)" Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.120215 4746 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-4hf8x container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:5443/healthz\": context deadline exceeded" start-of-body= Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.120236 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" podUID="247537bc-cda2-416f-8040-ecd313916cf2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.33:5443/healthz\": context deadline exceeded" Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.170735 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:32 crc kubenswrapper[4746]: E1211 09:56:32.170946 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:32.670915904 +0000 UTC m=+165.530779217 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.171257 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:32 crc kubenswrapper[4746]: E1211 09:56:32.171577 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:32.671564862 +0000 UTC m=+165.531428175 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.210658 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bgg6h" Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.272035 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:32 crc kubenswrapper[4746]: E1211 09:56:32.272400 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:32.772385497 +0000 UTC m=+165.632248810 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.277633 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvzvj" event={"ID":"611ec549-a1d2-4b9a-b67e-2216ec21327a","Type":"ContainerStarted","Data":"93f2714411df2c91bcbd9c4f933403f58dcbf74e496611a726b46b599f6b14b2"} Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.278691 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d90e3835-f259-4c44-b968-a7d90ef7dda5","Type":"ContainerStarted","Data":"a2c96ea60c261242636fce03f25087e1e5ac0cc279e8a943e5e778031341f98e"} Dec 11 09:56:32 crc 
kubenswrapper[4746]: I1211 09:56:32.374033 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:32 crc kubenswrapper[4746]: E1211 09:56:32.374425 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:32.874412434 +0000 UTC m=+165.734275737 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.475661 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 11 09:56:32 crc kubenswrapper[4746]: E1211 09:56:32.475843 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-11 09:56:32.975816953 +0000 UTC m=+165.835680266 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.475998 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs\") pod \"network-metrics-daemon-xh6zv\" (UID: \"2a55e871-062f-43fd-a1e2-b2296474f4f3\") " pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.476071 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:32 crc kubenswrapper[4746]: E1211 09:56:32.476644 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:32.976616095 +0000 UTC m=+165.836479448 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.480461 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2a55e871-062f-43fd-a1e2-b2296474f4f3-metrics-certs\") pod \"network-metrics-daemon-xh6zv\" (UID: \"2a55e871-062f-43fd-a1e2-b2296474f4f3\") " pod="openshift-multus/network-metrics-daemon-xh6zv" Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.499844 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 09:56:32 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Dec 11 09:56:32 crc kubenswrapper[4746]: [+]process-running ok Dec 11 09:56:32 crc kubenswrapper[4746]: healthz check failed Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.499905 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.577149 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:32 crc kubenswrapper[4746]: E1211 09:56:32.577518 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:33.077503481 +0000 UTC m=+165.937366784 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.678269 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:32 crc kubenswrapper[4746]: E1211 09:56:32.678652 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:33.178639934 +0000 UTC m=+166.038503247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.743258 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xh6zv"
Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.778804 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:32 crc kubenswrapper[4746]: E1211 09:56:32.778959 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:33.278939484 +0000 UTC m=+166.138802797 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.778987 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:32 crc kubenswrapper[4746]: E1211 09:56:32.779347 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:33.279337035 +0000 UTC m=+166.139200348 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.831592 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8wm4\" (UniqueName: \"kubernetes.io/projected/5669b14f-d850-4cfc-a7ae-18f880dbccb5-kube-api-access-m8wm4\") pod \"redhat-operators-whqdv\" (UID: \"5669b14f-d850-4cfc-a7ae-18f880dbccb5\") " pod="openshift-marketplace/redhat-operators-whqdv"
Dec 11 09:56:32 crc kubenswrapper[4746]: W1211 09:56:32.860204 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ea31c3c_c4a5_47be_bddf_49b679f030d6.slice/crio-2e685609e2a0c86a28a550eae6c08d111501f4799647a2b75c7e87362fa502f6 WatchSource:0}: Error finding container 2e685609e2a0c86a28a550eae6c08d111501f4799647a2b75c7e87362fa502f6: Status 404 returned error can't find the container with id 2e685609e2a0c86a28a550eae6c08d111501f4799647a2b75c7e87362fa502f6
Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.880014 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:32 crc kubenswrapper[4746]: E1211 09:56:32.880156 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:33.380136929 +0000 UTC m=+166.240000242 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.880328 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:32 crc kubenswrapper[4746]: E1211 09:56:32.880628 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:33.380620001 +0000 UTC m=+166.240483324 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.948750 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4"
Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.956947 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-w96w4"
Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.990169 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:32 crc kubenswrapper[4746]: E1211 09:56:32.990313 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:33.490267783 +0000 UTC m=+166.350131096 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:32 crc kubenswrapper[4746]: I1211 09:56:32.990892 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:32 crc kubenswrapper[4746]: E1211 09:56:32.992412 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:33.492396121 +0000 UTC m=+166.352259674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.011545 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whqdv"
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.099280 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:33 crc kubenswrapper[4746]: E1211 09:56:33.099564 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:33.599530925 +0000 UTC m=+166.459394238 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.201280 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:33 crc kubenswrapper[4746]: E1211 09:56:33.202497 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:33.702472597 +0000 UTC m=+166.562335920 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.379838 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:33 crc kubenswrapper[4746]: E1211 09:56:33.379958 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:33.879935385 +0000 UTC m=+166.739798698 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.380025 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:33 crc kubenswrapper[4746]: E1211 09:56:33.380395 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:33.880386797 +0000 UTC m=+166.740250110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.410443 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5p9dz" event={"ID":"7ea31c3c-c4a5-47be-bddf-49b679f030d6","Type":"ContainerStarted","Data":"2e685609e2a0c86a28a550eae6c08d111501f4799647a2b75c7e87362fa502f6"}
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.453372 4746 generic.go:334] "Generic (PLEG): container finished" podID="d9c295d4-1896-4e5e-989e-2d0a3eb9b07e" containerID="be661d318b8360bf8675bf87e0885727febb81c4d6a663a4d5a6e0a298e257bc" exitCode=0
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.453462 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" event={"ID":"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e","Type":"ContainerDied","Data":"be661d318b8360bf8675bf87e0885727febb81c4d6a663a4d5a6e0a298e257bc"}
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.456794 4746 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.462969 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d90e3835-f259-4c44-b968-a7d90ef7dda5","Type":"ContainerStarted","Data":"74a5c23a001a836f2cb5d5a3118493f295865b619ca0657df60fc6d8f35aff14"}
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.476169 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" event={"ID":"9dd7dcf7-5174-44fb-b164-38de3c8788ad","Type":"ContainerStarted","Data":"06cb0b53f0473c6b9f9730826fe97516dbf305165dbaf62463c77982a6d653a1"}
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.478583 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkx2w" event={"ID":"d5155500-7423-4b61-8724-8172a012cd8a","Type":"ContainerStarted","Data":"9a6b779ea57745070c483d57aaa6493fad5d31f81d1dba8f27233bab4f1a0b7b"}
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.480272 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.481840 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:33 crc kubenswrapper[4746]: E1211 09:56:33.482511 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:33.982487536 +0000 UTC m=+166.842350839 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.484180 4746 generic.go:334] "Generic (PLEG): container finished" podID="9be0459f-f161-4203-868a-ba2d577c96d1" containerID="55a4e7f1e05a46ffa7d1d32b549da0f8040f119fe45aa30e84926e8c1687d0fb" exitCode=0
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.484287 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pldtc" event={"ID":"9be0459f-f161-4203-868a-ba2d577c96d1","Type":"ContainerDied","Data":"55a4e7f1e05a46ffa7d1d32b549da0f8040f119fe45aa30e84926e8c1687d0fb"}
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.488285 4746 generic.go:334] "Generic (PLEG): container finished" podID="492f0d06-a8d7-4f81-a86a-5f1caea059cd" containerID="8fe538cc9f6f9bda884a3906a8dcfc213fd9a768d915c3b587d251c1d1095578" exitCode=0
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.489809 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq5fs" event={"ID":"492f0d06-a8d7-4f81-a86a-5f1caea059cd","Type":"ContainerDied","Data":"8fe538cc9f6f9bda884a3906a8dcfc213fd9a768d915c3b587d251c1d1095578"}
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.585294 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:33 crc kubenswrapper[4746]: E1211 09:56:33.585746 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:34.085728096 +0000 UTC m=+166.945591409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.613285 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 11 09:56:33 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld
Dec 11 09:56:33 crc kubenswrapper[4746]: [+]process-running ok
Dec 11 09:56:33 crc kubenswrapper[4746]: healthz check failed
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.613357 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.688565 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:33 crc kubenswrapper[4746]: E1211 09:56:33.689901 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:34.189873919 +0000 UTC m=+167.049737232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.738710 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.738689213 podStartE2EDuration="3.738689213s" podCreationTimestamp="2025-12-11 09:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:33.726447984 +0000 UTC m=+166.586311317" watchObservedRunningTime="2025-12-11 09:56:33.738689213 +0000 UTC m=+166.598552526"
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.791492 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:33 crc kubenswrapper[4746]: E1211 09:56:33.792156 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:34.292138482 +0000 UTC m=+167.152001795 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:33 crc kubenswrapper[4746]: I1211 09:56:33.893463 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:33 crc kubenswrapper[4746]: E1211 09:56:33.893959 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-11 09:56:34.393938243 +0000 UTC m=+167.253801556 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.000027 4746 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-11T09:56:33.456825965Z","Handler":null,"Name":""}
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.019273 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:34 crc kubenswrapper[4746]: E1211 09:56:34.019885 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-11 09:56:34.519868404 +0000 UTC m=+167.379731717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-9qmb2" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.023515 4746 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.023567 4746 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.041431 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bgg6h"]
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.120792 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.135661 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.224355 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.227577 4746 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.227624 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.243598 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xh6zv"]
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.303860 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-9qmb2\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.500004 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 11 09:56:34 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld
Dec 11 09:56:34 crc kubenswrapper[4746]: [+]process-running ok
Dec 11 09:56:34 crc kubenswrapper[4746]: healthz check failed
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.500067 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.505984 4746 generic.go:334] "Generic (PLEG): container finished" podID="611ec549-a1d2-4b9a-b67e-2216ec21327a" containerID="4072a9ff8418162d5c049da6482babdfc97069891f207a32cafa98f1b1836121" exitCode=0
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.506062 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvzvj" event={"ID":"611ec549-a1d2-4b9a-b67e-2216ec21327a","Type":"ContainerDied","Data":"4072a9ff8418162d5c049da6482babdfc97069891f207a32cafa98f1b1836121"}
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.507578 4746 generic.go:334] "Generic (PLEG): container finished" podID="feadbc9f-ff9b-47e3-bb8a-121af59a7ff7" containerID="9229aaf0fc8b38407e09a655c5e4ad591b27d63d803d7ede5dc8f508bb7af7f5" exitCode=0
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.507646 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7kv7" event={"ID":"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7","Type":"ContainerDied","Data":"9229aaf0fc8b38407e09a655c5e4ad591b27d63d803d7ede5dc8f508bb7af7f5"}
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.511272 4746 generic.go:334] "Generic (PLEG): container finished" podID="7ea31c3c-c4a5-47be-bddf-49b679f030d6" containerID="04fda1472b21ce93cb61e3b765b899279d11dc37f7ed4fa433adf55958375df5" exitCode=0
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.511325 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5p9dz" event={"ID":"7ea31c3c-c4a5-47be-bddf-49b679f030d6","Type":"ContainerDied","Data":"04fda1472b21ce93cb61e3b765b899279d11dc37f7ed4fa433adf55958375df5"}
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.515517 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" event={"ID":"2a55e871-062f-43fd-a1e2-b2296474f4f3","Type":"ContainerStarted","Data":"b5e9686bd4b253bfb137388ad18bf93b5cf5699ddad5818ce305ea6c9cc09257"}
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.521449 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7eec3d9f-436e-4d57-8bed-e1ed3bf328ec","Type":"ContainerDied","Data":"586c3d83c46ac14020ce65302909804f95331da1b7664dbad2aada5de5c60eda"}
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.521530 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="586c3d83c46ac14020ce65302909804f95331da1b7664dbad2aada5de5c60eda"
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.531165 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgg6h" event={"ID":"cccdab48-aeee-44f8-aebc-90129170ea8a","Type":"ContainerStarted","Data":"9ca43008fd1a6b7b40701f7f8f405c46301828c06c009e55f36a0210eb3bce98"}
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.539125 4746 generic.go:334] "Generic (PLEG): container finished" podID="d5155500-7423-4b61-8724-8172a012cd8a" containerID="9a6b779ea57745070c483d57aaa6493fad5d31f81d1dba8f27233bab4f1a0b7b" exitCode=0
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.539211 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkx2w" event={"ID":"d5155500-7423-4b61-8724-8172a012cd8a","Type":"ContainerDied","Data":"9a6b779ea57745070c483d57aaa6493fad5d31f81d1dba8f27233bab4f1a0b7b"}
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.560907 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2"
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.723414 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.731776 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-whqdv"]
Dec 11 09:56:34 crc kubenswrapper[4746]: W1211 09:56:34.749022 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5669b14f_d850_4cfc_a7ae_18f880dbccb5.slice/crio-454977cf9a60c17fca4318cde0a5b65bb22c989fc70f656b386e6f5f88d357b5 WatchSource:0}: Error finding container 454977cf9a60c17fca4318cde0a5b65bb22c989fc70f656b386e6f5f88d357b5: Status 404 returned error can't find the container with id 454977cf9a60c17fca4318cde0a5b65bb22c989fc70f656b386e6f5f88d357b5
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.776704 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7eec3d9f-436e-4d57-8bed-e1ed3bf328ec-kube-api-access\") pod \"7eec3d9f-436e-4d57-8bed-e1ed3bf328ec\" (UID: \"7eec3d9f-436e-4d57-8bed-e1ed3bf328ec\") "
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.777024 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7eec3d9f-436e-4d57-8bed-e1ed3bf328ec-kubelet-dir\") pod \"7eec3d9f-436e-4d57-8bed-e1ed3bf328ec\" (UID: \"7eec3d9f-436e-4d57-8bed-e1ed3bf328ec\") "
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.777345 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eec3d9f-436e-4d57-8bed-e1ed3bf328ec-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7eec3d9f-436e-4d57-8bed-e1ed3bf328ec" (UID: "7eec3d9f-436e-4d57-8bed-e1ed3bf328ec"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.833116 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eec3d9f-436e-4d57-8bed-e1ed3bf328ec-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7eec3d9f-436e-4d57-8bed-e1ed3bf328ec" (UID: "7eec3d9f-436e-4d57-8bed-e1ed3bf328ec"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.884774 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7eec3d9f-436e-4d57-8bed-e1ed3bf328ec-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 11 09:56:34 crc kubenswrapper[4746]: I1211 09:56:34.884815 4746 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7eec3d9f-436e-4d57-8bed-e1ed3bf328ec-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 11 09:56:35 crc kubenswrapper[4746]: I1211 09:56:35.453833 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9qmb2"]
Dec 11 09:56:35 crc kubenswrapper[4746]: I1211 09:56:35.500826 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 
11 09:56:35 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Dec 11 09:56:35 crc kubenswrapper[4746]: [+]process-running ok Dec 11 09:56:35 crc kubenswrapper[4746]: healthz check failed Dec 11 09:56:35 crc kubenswrapper[4746]: I1211 09:56:35.500877 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 09:56:35 crc kubenswrapper[4746]: I1211 09:56:35.545484 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 11 09:56:35 crc kubenswrapper[4746]: I1211 09:56:35.548409 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whqdv" event={"ID":"5669b14f-d850-4cfc-a7ae-18f880dbccb5","Type":"ContainerStarted","Data":"454977cf9a60c17fca4318cde0a5b65bb22c989fc70f656b386e6f5f88d357b5"} Dec 11 09:56:35 crc kubenswrapper[4746]: I1211 09:56:35.642845 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 11 09:56:35 crc kubenswrapper[4746]: I1211 09:56:35.664749 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-952k5" Dec 11 09:56:35 crc kubenswrapper[4746]: I1211 09:56:35.815147 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:35.943705 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-secret-volume\") pod \"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e\" (UID: \"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e\") " Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:35.943760 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-config-volume\") pod \"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e\" (UID: \"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e\") " Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:35.943794 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz6gl\" (UniqueName: \"kubernetes.io/projected/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-kube-api-access-zz6gl\") pod \"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e\" (UID: \"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e\") " Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:35.966695 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-config-volume" (OuterVolumeSpecName: "config-volume") pod "d9c295d4-1896-4e5e-989e-2d0a3eb9b07e" (UID: "d9c295d4-1896-4e5e-989e-2d0a3eb9b07e"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:36.048448 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:36.098055 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d9c295d4-1896-4e5e-989e-2d0a3eb9b07e" (UID: "d9c295d4-1896-4e5e-989e-2d0a3eb9b07e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:36.098148 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-kube-api-access-zz6gl" (OuterVolumeSpecName: "kube-api-access-zz6gl") pod "d9c295d4-1896-4e5e-989e-2d0a3eb9b07e" (UID: "d9c295d4-1896-4e5e-989e-2d0a3eb9b07e"). InnerVolumeSpecName "kube-api-access-zz6gl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:36.158087 4746 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:36.158139 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz6gl\" (UniqueName: \"kubernetes.io/projected/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e-kube-api-access-zz6gl\") on node \"crc\" DevicePath \"\"" Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:36.177596 4746 patch_prober.go:28] interesting pod/apiserver-76f77b778f-8b44g container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 11 09:56:36 crc kubenswrapper[4746]: [+]log ok Dec 11 09:56:36 crc kubenswrapper[4746]: [+]etcd ok Dec 11 09:56:36 crc kubenswrapper[4746]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 11 09:56:36 crc kubenswrapper[4746]: [+]poststarthook/generic-apiserver-start-informers ok Dec 11 09:56:36 crc kubenswrapper[4746]: [+]poststarthook/max-in-flight-filter ok Dec 11 09:56:36 crc kubenswrapper[4746]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 11 09:56:36 crc kubenswrapper[4746]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 11 09:56:36 crc kubenswrapper[4746]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 11 09:56:36 crc kubenswrapper[4746]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 11 09:56:36 crc kubenswrapper[4746]: [-]poststarthook/project.openshift.io-projectcache failed: reason withheld Dec 11 09:56:36 crc kubenswrapper[4746]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 11 09:56:36 crc kubenswrapper[4746]: 
[-]poststarthook/openshift.io-startinformers failed: reason withheld Dec 11 09:56:36 crc kubenswrapper[4746]: [-]poststarthook/openshift.io-restmapperupdater failed: reason withheld Dec 11 09:56:36 crc kubenswrapper[4746]: [-]poststarthook/quota.openshift.io-clusterquotamapping failed: reason withheld Dec 11 09:56:36 crc kubenswrapper[4746]: livez check failed Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:36.177660 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-8b44g" podUID="97b454ad-ff7c-4c7b-9d53-79b92b7520de" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:36.504572 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 09:56:36 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Dec 11 09:56:36 crc kubenswrapper[4746]: [+]process-running ok Dec 11 09:56:36 crc kubenswrapper[4746]: healthz check failed Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:36.504696 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:36.551911 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" event={"ID":"d9c295d4-1896-4e5e-989e-2d0a3eb9b07e","Type":"ContainerDied","Data":"1923c28a18f89b8fa90dcbd554d5fc9660521905f18f91e172d1da1d63990ea9"} Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:36.551966 4746 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1923c28a18f89b8fa90dcbd554d5fc9660521905f18f91e172d1da1d63990ea9" Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:36.552234 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h" Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:36.554535 4746 generic.go:334] "Generic (PLEG): container finished" podID="d90e3835-f259-4c44-b968-a7d90ef7dda5" containerID="74a5c23a001a836f2cb5d5a3118493f295865b619ca0657df60fc6d8f35aff14" exitCode=0 Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:36.554635 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d90e3835-f259-4c44-b968-a7d90ef7dda5","Type":"ContainerDied","Data":"74a5c23a001a836f2cb5d5a3118493f295865b619ca0657df60fc6d8f35aff14"} Dec 11 09:56:36 crc kubenswrapper[4746]: I1211 09:56:36.555874 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" event={"ID":"d9d880ef-9ac5-4686-bf49-77406ca35135","Type":"ContainerStarted","Data":"04436326e4e6babc725d7d217a5fc8e107b577671a47f22ac5528cb304643680"} Dec 11 09:56:37 crc kubenswrapper[4746]: I1211 09:56:37.517317 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 09:56:37 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Dec 11 09:56:37 crc kubenswrapper[4746]: [+]process-running ok Dec 11 09:56:37 crc kubenswrapper[4746]: healthz check failed Dec 11 09:56:37 crc kubenswrapper[4746]: I1211 09:56:37.517372 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Dec 11 09:56:37 crc kubenswrapper[4746]: I1211 09:56:37.570579 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" event={"ID":"2a55e871-062f-43fd-a1e2-b2296474f4f3","Type":"ContainerStarted","Data":"2ef7f5abd205ff414365870da0b55b3167825d8f597a9890c0c28f42f70ba532"} Dec 11 09:56:37 crc kubenswrapper[4746]: I1211 09:56:37.575470 4746 generic.go:334] "Generic (PLEG): container finished" podID="cccdab48-aeee-44f8-aebc-90129170ea8a" containerID="6cacebf26909dd9bee5ec6ccbfe8106bee5365fd590be6e8cb861abf5ee457ce" exitCode=0 Dec 11 09:56:37 crc kubenswrapper[4746]: I1211 09:56:37.575542 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgg6h" event={"ID":"cccdab48-aeee-44f8-aebc-90129170ea8a","Type":"ContainerDied","Data":"6cacebf26909dd9bee5ec6ccbfe8106bee5365fd590be6e8cb861abf5ee457ce"} Dec 11 09:56:37 crc kubenswrapper[4746]: I1211 09:56:37.580148 4746 generic.go:334] "Generic (PLEG): container finished" podID="5669b14f-d850-4cfc-a7ae-18f880dbccb5" containerID="18c578b4f4bf1ee1b34f858656297eeb82a22501422f2615499ad07d079de3c1" exitCode=0 Dec 11 09:56:37 crc kubenswrapper[4746]: I1211 09:56:37.580295 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whqdv" event={"ID":"5669b14f-d850-4cfc-a7ae-18f880dbccb5","Type":"ContainerDied","Data":"18c578b4f4bf1ee1b34f858656297eeb82a22501422f2615499ad07d079de3c1"} Dec 11 09:56:37 crc kubenswrapper[4746]: I1211 09:56:37.599574 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" event={"ID":"9dd7dcf7-5174-44fb-b164-38de3c8788ad","Type":"ContainerStarted","Data":"5ddc9501ca8a21b76f2156d7f69a71b288b1e657c476c6a1f95b70e067032359"} Dec 11 09:56:37 crc kubenswrapper[4746]: I1211 09:56:37.629628 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" event={"ID":"d9d880ef-9ac5-4686-bf49-77406ca35135","Type":"ContainerStarted","Data":"da766278b2272b5480fe84d156e96d5caff9d1f205a50852d11b90d860597cf3"} Dec 11 09:56:37 crc kubenswrapper[4746]: I1211 09:56:37.629675 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:37 crc kubenswrapper[4746]: I1211 09:56:37.750619 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-m8w9j" podStartSLOduration=20.750597006 podStartE2EDuration="20.750597006s" podCreationTimestamp="2025-12-11 09:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:37.74850537 +0000 UTC m=+170.608368683" watchObservedRunningTime="2025-12-11 09:56:37.750597006 +0000 UTC m=+170.610460319" Dec 11 09:56:37 crc kubenswrapper[4746]: I1211 09:56:37.847236 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" podStartSLOduration=147.847213267 podStartE2EDuration="2m27.847213267s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:37.836412057 +0000 UTC m=+170.696275370" watchObservedRunningTime="2025-12-11 09:56:37.847213267 +0000 UTC m=+170.707076600" Dec 11 09:56:38 crc kubenswrapper[4746]: I1211 09:56:38.502570 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 09:56:38 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Dec 11 09:56:38 crc 
kubenswrapper[4746]: [+]process-running ok Dec 11 09:56:38 crc kubenswrapper[4746]: healthz check failed Dec 11 09:56:38 crc kubenswrapper[4746]: I1211 09:56:38.502899 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 09:56:38 crc kubenswrapper[4746]: I1211 09:56:38.639668 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 09:56:38 crc kubenswrapper[4746]: I1211 09:56:38.643122 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xh6zv" event={"ID":"2a55e871-062f-43fd-a1e2-b2296474f4f3","Type":"ContainerStarted","Data":"78fe5aaf04be74fb20cd137f44ba2badb202c41f2cd3f119bf02fa2a831f8e83"} Dec 11 09:56:38 crc kubenswrapper[4746]: I1211 09:56:38.652118 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 11 09:56:38 crc kubenswrapper[4746]: I1211 09:56:38.652297 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"d90e3835-f259-4c44-b968-a7d90ef7dda5","Type":"ContainerDied","Data":"a2c96ea60c261242636fce03f25087e1e5ac0cc279e8a943e5e778031341f98e"} Dec 11 09:56:38 crc kubenswrapper[4746]: I1211 09:56:38.652317 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2c96ea60c261242636fce03f25087e1e5ac0cc279e8a943e5e778031341f98e" Dec 11 09:56:38 crc kubenswrapper[4746]: I1211 09:56:38.701951 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xh6zv" podStartSLOduration=148.701932499 podStartE2EDuration="2m28.701932499s" podCreationTimestamp="2025-12-11 09:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:56:38.693279697 +0000 UTC m=+171.553143010" watchObservedRunningTime="2025-12-11 09:56:38.701932499 +0000 UTC m=+171.561795812" Dec 11 09:56:38 crc kubenswrapper[4746]: I1211 09:56:38.761133 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d90e3835-f259-4c44-b968-a7d90ef7dda5-kube-api-access\") pod \"d90e3835-f259-4c44-b968-a7d90ef7dda5\" (UID: \"d90e3835-f259-4c44-b968-a7d90ef7dda5\") " Dec 11 09:56:38 crc kubenswrapper[4746]: I1211 09:56:38.761173 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d90e3835-f259-4c44-b968-a7d90ef7dda5-kubelet-dir\") pod \"d90e3835-f259-4c44-b968-a7d90ef7dda5\" (UID: \"d90e3835-f259-4c44-b968-a7d90ef7dda5\") " Dec 11 09:56:38 crc kubenswrapper[4746]: I1211 09:56:38.763360 4746 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d90e3835-f259-4c44-b968-a7d90ef7dda5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "d90e3835-f259-4c44-b968-a7d90ef7dda5" (UID: "d90e3835-f259-4c44-b968-a7d90ef7dda5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 09:56:38 crc kubenswrapper[4746]: I1211 09:56:38.798676 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d90e3835-f259-4c44-b968-a7d90ef7dda5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d90e3835-f259-4c44-b968-a7d90ef7dda5" (UID: "d90e3835-f259-4c44-b968-a7d90ef7dda5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:56:38 crc kubenswrapper[4746]: I1211 09:56:38.880421 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d90e3835-f259-4c44-b968-a7d90ef7dda5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 09:56:38 crc kubenswrapper[4746]: I1211 09:56:38.880790 4746 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d90e3835-f259-4c44-b968-a7d90ef7dda5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 09:56:39 crc kubenswrapper[4746]: I1211 09:56:39.507960 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 09:56:39 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Dec 11 09:56:39 crc kubenswrapper[4746]: [+]process-running ok Dec 11 09:56:39 crc kubenswrapper[4746]: healthz check failed Dec 11 09:56:39 crc kubenswrapper[4746]: I1211 09:56:39.508011 4746 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 09:56:40 crc kubenswrapper[4746]: I1211 09:56:40.009560 4746 patch_prober.go:28] interesting pod/console-f9d7485db-4n677 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 11 09:56:40 crc kubenswrapper[4746]: I1211 09:56:40.009607 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-4n677" podUID="2d6e68f4-a35b-43d1-b1fb-95600add4933" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 11 09:56:40 crc kubenswrapper[4746]: I1211 09:56:40.112743 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:56:40 crc kubenswrapper[4746]: I1211 09:56:40.112813 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:56:40 crc kubenswrapper[4746]: I1211 09:56:40.116056 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:56:40 crc kubenswrapper[4746]: I1211 09:56:40.116101 4746 prober.go:107] "Probe 
failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:56:40 crc kubenswrapper[4746]: I1211 09:56:40.415538 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:40 crc kubenswrapper[4746]: I1211 09:56:40.421989 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-8b44g" Dec 11 09:56:40 crc kubenswrapper[4746]: I1211 09:56:40.533327 4746 patch_prober.go:28] interesting pod/router-default-5444994796-5bgx9 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 11 09:56:40 crc kubenswrapper[4746]: [-]has-synced failed: reason withheld Dec 11 09:56:40 crc kubenswrapper[4746]: [+]process-running ok Dec 11 09:56:40 crc kubenswrapper[4746]: healthz check failed Dec 11 09:56:40 crc kubenswrapper[4746]: I1211 09:56:40.533394 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5bgx9" podUID="6890a664-38ec-4702-b9db-7bbc19fe5aae" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 11 09:56:41 crc kubenswrapper[4746]: I1211 09:56:41.123413 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-4hf8x" Dec 11 09:56:41 crc kubenswrapper[4746]: I1211 09:56:41.500751 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:41 crc kubenswrapper[4746]: I1211 09:56:41.508392 4746 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5bgx9" Dec 11 09:56:50 crc kubenswrapper[4746]: I1211 09:56:50.078787 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:56:50 crc kubenswrapper[4746]: I1211 09:56:50.079432 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:56:50 crc kubenswrapper[4746]: I1211 09:56:50.079520 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-nqbcs" Dec 11 09:56:50 crc kubenswrapper[4746]: I1211 09:56:50.080603 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"bcfb119c6ca60f2f256406f80a4d614d7c19b115819c0aa9ea28c87b6f8b3ac0"} pod="openshift-console/downloads-7954f5f757-nqbcs" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 11 09:56:50 crc kubenswrapper[4746]: I1211 09:56:50.080750 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" containerID="cri-o://bcfb119c6ca60f2f256406f80a4d614d7c19b115819c0aa9ea28c87b6f8b3ac0" gracePeriod=2 Dec 11 09:56:50 crc kubenswrapper[4746]: I1211 09:56:50.078819 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:56:50 crc kubenswrapper[4746]: I1211 09:56:50.080938 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:56:50 crc kubenswrapper[4746]: I1211 09:56:50.081342 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:56:50 crc kubenswrapper[4746]: I1211 09:56:50.081439 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:56:50 crc kubenswrapper[4746]: I1211 09:56:50.264812 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:50 crc kubenswrapper[4746]: I1211 09:56:50.272245 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-4n677" Dec 11 09:56:52 crc kubenswrapper[4746]: I1211 09:56:52.276305 4746 generic.go:334] "Generic (PLEG): container finished" podID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerID="bcfb119c6ca60f2f256406f80a4d614d7c19b115819c0aa9ea28c87b6f8b3ac0" exitCode=0 Dec 11 09:56:52 crc kubenswrapper[4746]: I1211 09:56:52.276415 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nqbcs" 
event={"ID":"2895076f-4e51-4f1c-ae8b-e8e9d1b8888d","Type":"ContainerDied","Data":"bcfb119c6ca60f2f256406f80a4d614d7c19b115819c0aa9ea28c87b6f8b3ac0"} Dec 11 09:56:54 crc kubenswrapper[4746]: I1211 09:56:54.571123 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:56:57 crc kubenswrapper[4746]: I1211 09:56:57.228474 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 11 09:56:59 crc kubenswrapper[4746]: I1211 09:56:59.877456 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:56:59 crc kubenswrapper[4746]: I1211 09:56:59.877817 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:57:00 crc kubenswrapper[4746]: I1211 09:57:00.139692 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:57:00 crc kubenswrapper[4746]: I1211 09:57:00.139785 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 
11 09:57:00 crc kubenswrapper[4746]: I1211 09:57:00.567142 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-dhjm8" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.582772 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 09:57:05 crc kubenswrapper[4746]: E1211 09:57:05.583612 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9c295d4-1896-4e5e-989e-2d0a3eb9b07e" containerName="collect-profiles" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.583629 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9c295d4-1896-4e5e-989e-2d0a3eb9b07e" containerName="collect-profiles" Dec 11 09:57:05 crc kubenswrapper[4746]: E1211 09:57:05.583651 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eec3d9f-436e-4d57-8bed-e1ed3bf328ec" containerName="pruner" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.583660 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eec3d9f-436e-4d57-8bed-e1ed3bf328ec" containerName="pruner" Dec 11 09:57:05 crc kubenswrapper[4746]: E1211 09:57:05.583671 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d90e3835-f259-4c44-b968-a7d90ef7dda5" containerName="pruner" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.583681 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d90e3835-f259-4c44-b968-a7d90ef7dda5" containerName="pruner" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.583836 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eec3d9f-436e-4d57-8bed-e1ed3bf328ec" containerName="pruner" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.583858 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9c295d4-1896-4e5e-989e-2d0a3eb9b07e" containerName="collect-profiles" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 
09:57:05.583868 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="d90e3835-f259-4c44-b968-a7d90ef7dda5" containerName="pruner" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.584349 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.588665 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.588925 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.589655 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.710393 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b7089c6-6e57-4968-bda7-db0933422a31-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6b7089c6-6e57-4968-bda7-db0933422a31\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.710624 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b7089c6-6e57-4968-bda7-db0933422a31-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6b7089c6-6e57-4968-bda7-db0933422a31\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.811772 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b7089c6-6e57-4968-bda7-db0933422a31-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: 
\"6b7089c6-6e57-4968-bda7-db0933422a31\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.811900 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b7089c6-6e57-4968-bda7-db0933422a31-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6b7089c6-6e57-4968-bda7-db0933422a31\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.812231 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b7089c6-6e57-4968-bda7-db0933422a31-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6b7089c6-6e57-4968-bda7-db0933422a31\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.840667 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b7089c6-6e57-4968-bda7-db0933422a31-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6b7089c6-6e57-4968-bda7-db0933422a31\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 09:57:05 crc kubenswrapper[4746]: I1211 09:57:05.924094 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 09:57:10 crc kubenswrapper[4746]: I1211 09:57:10.078707 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:57:10 crc kubenswrapper[4746]: I1211 09:57:10.079081 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:57:10 crc kubenswrapper[4746]: I1211 09:57:10.375327 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 11 09:57:10 crc kubenswrapper[4746]: I1211 09:57:10.376430 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 09:57:10 crc kubenswrapper[4746]: I1211 09:57:10.389287 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 11 09:57:10 crc kubenswrapper[4746]: I1211 09:57:10.601001 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1222305b-d227-4d6a-a76b-90a5ade7c176-kubelet-dir\") pod \"installer-9-crc\" (UID: \"1222305b-d227-4d6a-a76b-90a5ade7c176\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 09:57:10 crc kubenswrapper[4746]: I1211 09:57:10.601130 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1222305b-d227-4d6a-a76b-90a5ade7c176-var-lock\") pod \"installer-9-crc\" (UID: \"1222305b-d227-4d6a-a76b-90a5ade7c176\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 09:57:10 crc kubenswrapper[4746]: I1211 09:57:10.601156 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1222305b-d227-4d6a-a76b-90a5ade7c176-kube-api-access\") pod \"installer-9-crc\" (UID: \"1222305b-d227-4d6a-a76b-90a5ade7c176\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 09:57:10 crc kubenswrapper[4746]: I1211 09:57:10.754794 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1222305b-d227-4d6a-a76b-90a5ade7c176-var-lock\") pod \"installer-9-crc\" (UID: \"1222305b-d227-4d6a-a76b-90a5ade7c176\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 09:57:10 crc kubenswrapper[4746]: I1211 09:57:10.755395 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/1222305b-d227-4d6a-a76b-90a5ade7c176-var-lock\") pod \"installer-9-crc\" (UID: \"1222305b-d227-4d6a-a76b-90a5ade7c176\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 09:57:10 crc kubenswrapper[4746]: I1211 09:57:10.754866 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1222305b-d227-4d6a-a76b-90a5ade7c176-kube-api-access\") pod \"installer-9-crc\" (UID: \"1222305b-d227-4d6a-a76b-90a5ade7c176\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 09:57:10 crc kubenswrapper[4746]: I1211 09:57:10.756615 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1222305b-d227-4d6a-a76b-90a5ade7c176-kubelet-dir\") pod \"installer-9-crc\" (UID: \"1222305b-d227-4d6a-a76b-90a5ade7c176\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 09:57:10 crc kubenswrapper[4746]: I1211 09:57:10.756690 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1222305b-d227-4d6a-a76b-90a5ade7c176-kubelet-dir\") pod \"installer-9-crc\" (UID: \"1222305b-d227-4d6a-a76b-90a5ade7c176\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 09:57:10 crc kubenswrapper[4746]: I1211 09:57:10.801454 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1222305b-d227-4d6a-a76b-90a5ade7c176-kube-api-access\") pod \"installer-9-crc\" (UID: \"1222305b-d227-4d6a-a76b-90a5ade7c176\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 11 09:57:11 crc kubenswrapper[4746]: I1211 09:57:11.016769 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 09:57:20 crc kubenswrapper[4746]: I1211 09:57:20.080361 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:57:20 crc kubenswrapper[4746]: I1211 09:57:20.080972 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:57:29 crc kubenswrapper[4746]: I1211 09:57:29.877433 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:57:29 crc kubenswrapper[4746]: I1211 09:57:29.878095 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 09:57:29 crc kubenswrapper[4746]: I1211 09:57:29.878149 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 09:57:29 crc kubenswrapper[4746]: I1211 09:57:29.879080 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"09e1c4e58441d318207148b1a44f3bf3ccebb1d74dcf428d6fb588a3b46a2cc5"} 
pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 09:57:29 crc kubenswrapper[4746]: I1211 09:57:29.879141 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" containerID="cri-o://09e1c4e58441d318207148b1a44f3bf3ccebb1d74dcf428d6fb588a3b46a2cc5" gracePeriod=600 Dec 11 09:57:30 crc kubenswrapper[4746]: I1211 09:57:30.060752 4746 generic.go:334] "Generic (PLEG): container finished" podID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerID="09e1c4e58441d318207148b1a44f3bf3ccebb1d74dcf428d6fb588a3b46a2cc5" exitCode=0 Dec 11 09:57:30 crc kubenswrapper[4746]: I1211 09:57:30.060797 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerDied","Data":"09e1c4e58441d318207148b1a44f3bf3ccebb1d74dcf428d6fb588a3b46a2cc5"} Dec 11 09:57:30 crc kubenswrapper[4746]: I1211 09:57:30.078606 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:57:30 crc kubenswrapper[4746]: I1211 09:57:30.078668 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:57:38 crc kubenswrapper[4746]: E1211 09:57:38.076076 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = 
copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 11 09:57:38 crc kubenswrapper[4746]: E1211 09:57:38.076860 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lh759,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-bgg6h_openshift-marketplace(cccdab48-aeee-44f8-aebc-90129170ea8a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled" logger="UnhandledError" Dec 11 09:57:38 crc kubenswrapper[4746]: E1211 09:57:38.078843 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-bgg6h" podUID="cccdab48-aeee-44f8-aebc-90129170ea8a" Dec 11 09:57:39 crc kubenswrapper[4746]: E1211 09:57:39.964324 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-bgg6h" podUID="cccdab48-aeee-44f8-aebc-90129170ea8a" Dec 11 09:57:40 crc kubenswrapper[4746]: E1211 09:57:40.035128 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 11 09:57:40 crc kubenswrapper[4746]: E1211 09:57:40.035387 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2bkdz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-pldtc_openshift-marketplace(9be0459f-f161-4203-868a-ba2d577c96d1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 09:57:40 crc kubenswrapper[4746]: E1211 09:57:40.036579 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-pldtc" podUID="9be0459f-f161-4203-868a-ba2d577c96d1" Dec 11 09:57:40 crc 
kubenswrapper[4746]: I1211 09:57:40.079496 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:57:40 crc kubenswrapper[4746]: I1211 09:57:40.079568 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:57:43 crc kubenswrapper[4746]: E1211 09:57:43.307354 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-pldtc" podUID="9be0459f-f161-4203-868a-ba2d577c96d1" Dec 11 09:57:46 crc kubenswrapper[4746]: E1211 09:57:46.954141 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 11 09:57:46 crc kubenswrapper[4746]: E1211 09:57:46.954690 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f67fc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-s7kv7_openshift-marketplace(feadbc9f-ff9b-47e3-bb8a-121af59a7ff7): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 09:57:46 crc kubenswrapper[4746]: E1211 09:57:46.955883 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-s7kv7" podUID="feadbc9f-ff9b-47e3-bb8a-121af59a7ff7" Dec 11 09:57:46 crc 
kubenswrapper[4746]: E1211 09:57:46.973668 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 11 09:57:46 crc kubenswrapper[4746]: E1211 09:57:46.973807 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c5phx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-rkx2w_openshift-marketplace(d5155500-7423-4b61-8724-8172a012cd8a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 09:57:46 crc kubenswrapper[4746]: E1211 09:57:46.975017 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-rkx2w" podUID="d5155500-7423-4b61-8724-8172a012cd8a" Dec 11 09:57:47 crc kubenswrapper[4746]: E1211 09:57:47.033595 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 11 09:57:47 crc kubenswrapper[4746]: E1211 09:57:47.033775 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8wm4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-whqdv_openshift-marketplace(5669b14f-d850-4cfc-a7ae-18f880dbccb5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 09:57:47 crc kubenswrapper[4746]: E1211 09:57:47.034502 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 11 09:57:47 crc kubenswrapper[4746]: E1211 09:57:47.034644 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2ghz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vq5fs_openshift-marketplace(492f0d06-a8d7-4f81-a86a-5f1caea059cd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 09:57:47 crc kubenswrapper[4746]: E1211 09:57:47.035618 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-whqdv" podUID="5669b14f-d850-4cfc-a7ae-18f880dbccb5" Dec 11 09:57:47 crc kubenswrapper[4746]: E1211 09:57:47.035726 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vq5fs" podUID="492f0d06-a8d7-4f81-a86a-5f1caea059cd" Dec 11 09:57:49 crc kubenswrapper[4746]: E1211 09:57:49.515863 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-s7kv7" podUID="feadbc9f-ff9b-47e3-bb8a-121af59a7ff7" Dec 11 09:57:49 crc kubenswrapper[4746]: E1211 09:57:49.516291 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vq5fs" podUID="492f0d06-a8d7-4f81-a86a-5f1caea059cd" Dec 11 09:57:49 crc kubenswrapper[4746]: E1211 09:57:49.516335 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-whqdv" podUID="5669b14f-d850-4cfc-a7ae-18f880dbccb5" Dec 11 09:57:49 crc kubenswrapper[4746]: E1211 09:57:49.516402 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-rkx2w" podUID="d5155500-7423-4b61-8724-8172a012cd8a" Dec 11 09:57:49 crc kubenswrapper[4746]: I1211 09:57:49.740353 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 11 09:57:50 crc kubenswrapper[4746]: I1211 09:57:50.079869 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:57:50 crc kubenswrapper[4746]: I1211 09:57:50.080386 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:57:50 crc kubenswrapper[4746]: I1211 09:57:50.242770 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1222305b-d227-4d6a-a76b-90a5ade7c176","Type":"ContainerStarted","Data":"d04d4367ce506d030a49e9b7d054dc23041b4956e17d134bd27f316f9fc21af4"} Dec 11 09:57:50 crc kubenswrapper[4746]: I1211 09:57:50.244886 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"2a42ad17d86d35ad64e581cff61d7570f6fe9c16ebe1b1b6377d0f2511611aed"} Dec 11 09:57:50 crc kubenswrapper[4746]: I1211 09:57:50.269406 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 11 09:57:50 crc kubenswrapper[4746]: E1211 09:57:50.545341 4746 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 11 09:57:50 crc kubenswrapper[4746]: E1211 09:57:50.545733 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xghhn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-5p9dz_openshift-marketplace(7ea31c3c-c4a5-47be-bddf-49b679f030d6): ErrImagePull: rpc error: code = Canceled desc = copying system 
image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 09:57:50 crc kubenswrapper[4746]: E1211 09:57:50.547520 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-5p9dz" podUID="7ea31c3c-c4a5-47be-bddf-49b679f030d6" Dec 11 09:57:51 crc kubenswrapper[4746]: I1211 09:57:51.250118 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1222305b-d227-4d6a-a76b-90a5ade7c176","Type":"ContainerStarted","Data":"5a715be1478cee295d8391bb597f56a30cf0e77aca9c99928a9cf26299498ca5"} Dec 11 09:57:51 crc kubenswrapper[4746]: I1211 09:57:51.252183 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nqbcs" event={"ID":"2895076f-4e51-4f1c-ae8b-e8e9d1b8888d","Type":"ContainerStarted","Data":"9f721e899bf216675ffb546b2206426bfae0be28d0cc2fbe38fa4bca9d0b08b9"} Dec 11 09:57:51 crc kubenswrapper[4746]: I1211 09:57:51.252607 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-nqbcs" Dec 11 09:57:51 crc kubenswrapper[4746]: I1211 09:57:51.252610 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:57:51 crc kubenswrapper[4746]: I1211 09:57:51.252677 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: 
connection refused" Dec 11 09:57:51 crc kubenswrapper[4746]: I1211 09:57:51.253758 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6b7089c6-6e57-4968-bda7-db0933422a31","Type":"ContainerStarted","Data":"7d4eb2f16f476605f3b6394409ff311cdb8652fdc98492dcf3af2f4d75fdbc30"} Dec 11 09:57:51 crc kubenswrapper[4746]: I1211 09:57:51.253780 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6b7089c6-6e57-4968-bda7-db0933422a31","Type":"ContainerStarted","Data":"ad9a3314eb120d4a4d565eb94fcab53b41022ae526779bbe817a3154ca906610"} Dec 11 09:57:51 crc kubenswrapper[4746]: E1211 09:57:51.255316 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-5p9dz" podUID="7ea31c3c-c4a5-47be-bddf-49b679f030d6" Dec 11 09:57:51 crc kubenswrapper[4746]: I1211 09:57:51.287126 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=41.287112126 podStartE2EDuration="41.287112126s" podCreationTimestamp="2025-12-11 09:57:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:57:51.268285095 +0000 UTC m=+244.128148408" watchObservedRunningTime="2025-12-11 09:57:51.287112126 +0000 UTC m=+244.146975439" Dec 11 09:57:52 crc kubenswrapper[4746]: I1211 09:57:52.260724 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:57:52 crc 
kubenswrapper[4746]: I1211 09:57:52.261108 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:57:52 crc kubenswrapper[4746]: I1211 09:57:52.280415 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=47.280398419 podStartE2EDuration="47.280398419s" podCreationTimestamp="2025-12-11 09:57:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:57:52.27893772 +0000 UTC m=+245.138801033" watchObservedRunningTime="2025-12-11 09:57:52.280398419 +0000 UTC m=+245.140261732" Dec 11 09:57:52 crc kubenswrapper[4746]: E1211 09:57:52.687524 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 11 09:57:52 crc kubenswrapper[4746]: E1211 09:57:52.687727 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mw8xr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-pvzvj_openshift-marketplace(611ec549-a1d2-4b9a-b67e-2216ec21327a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 09:57:52 crc kubenswrapper[4746]: E1211 09:57:52.688924 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-pvzvj" podUID="611ec549-a1d2-4b9a-b67e-2216ec21327a" Dec 11 09:57:53 crc 
kubenswrapper[4746]: I1211 09:57:53.267116 4746 generic.go:334] "Generic (PLEG): container finished" podID="6b7089c6-6e57-4968-bda7-db0933422a31" containerID="7d4eb2f16f476605f3b6394409ff311cdb8652fdc98492dcf3af2f4d75fdbc30" exitCode=0 Dec 11 09:57:53 crc kubenswrapper[4746]: I1211 09:57:53.267380 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6b7089c6-6e57-4968-bda7-db0933422a31","Type":"ContainerDied","Data":"7d4eb2f16f476605f3b6394409ff311cdb8652fdc98492dcf3af2f4d75fdbc30"} Dec 11 09:57:53 crc kubenswrapper[4746]: I1211 09:57:53.268165 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:57:53 crc kubenswrapper[4746]: I1211 09:57:53.268210 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:57:53 crc kubenswrapper[4746]: E1211 09:57:53.268988 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-pvzvj" podUID="611ec549-a1d2-4b9a-b67e-2216ec21327a" Dec 11 09:57:54 crc kubenswrapper[4746]: I1211 09:57:54.503208 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 09:57:54 crc kubenswrapper[4746]: I1211 09:57:54.650738 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b7089c6-6e57-4968-bda7-db0933422a31-kubelet-dir\") pod \"6b7089c6-6e57-4968-bda7-db0933422a31\" (UID: \"6b7089c6-6e57-4968-bda7-db0933422a31\") " Dec 11 09:57:54 crc kubenswrapper[4746]: I1211 09:57:54.650815 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b7089c6-6e57-4968-bda7-db0933422a31-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6b7089c6-6e57-4968-bda7-db0933422a31" (UID: "6b7089c6-6e57-4968-bda7-db0933422a31"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 09:57:54 crc kubenswrapper[4746]: I1211 09:57:54.651001 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b7089c6-6e57-4968-bda7-db0933422a31-kube-api-access\") pod \"6b7089c6-6e57-4968-bda7-db0933422a31\" (UID: \"6b7089c6-6e57-4968-bda7-db0933422a31\") " Dec 11 09:57:54 crc kubenswrapper[4746]: I1211 09:57:54.651544 4746 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b7089c6-6e57-4968-bda7-db0933422a31-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 09:57:54 crc kubenswrapper[4746]: I1211 09:57:54.666281 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b7089c6-6e57-4968-bda7-db0933422a31-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6b7089c6-6e57-4968-bda7-db0933422a31" (UID: "6b7089c6-6e57-4968-bda7-db0933422a31"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:57:54 crc kubenswrapper[4746]: I1211 09:57:54.753089 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b7089c6-6e57-4968-bda7-db0933422a31-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 09:57:55 crc kubenswrapper[4746]: I1211 09:57:55.279474 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"6b7089c6-6e57-4968-bda7-db0933422a31","Type":"ContainerDied","Data":"ad9a3314eb120d4a4d565eb94fcab53b41022ae526779bbe817a3154ca906610"} Dec 11 09:57:55 crc kubenswrapper[4746]: I1211 09:57:55.279731 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad9a3314eb120d4a4d565eb94fcab53b41022ae526779bbe817a3154ca906610" Dec 11 09:57:55 crc kubenswrapper[4746]: I1211 09:57:55.279520 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.579402 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jdpcb"] Dec 11 09:57:57 crc kubenswrapper[4746]: E1211 09:57:57.579899 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b7089c6-6e57-4968-bda7-db0933422a31" containerName="pruner" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.579910 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b7089c6-6e57-4968-bda7-db0933422a31" containerName="pruner" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.580001 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b7089c6-6e57-4968-bda7-db0933422a31" containerName="pruner" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.580429 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.591066 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jdpcb"] Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.703434 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/83fd43df-919e-403d-9508-c4cdab23269b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.703503 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83fd43df-919e-403d-9508-c4cdab23269b-trusted-ca\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.703533 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xphrt\" (UniqueName: \"kubernetes.io/projected/83fd43df-919e-403d-9508-c4cdab23269b-kube-api-access-xphrt\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.703673 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/83fd43df-919e-403d-9508-c4cdab23269b-registry-tls\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.703777 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/83fd43df-919e-403d-9508-c4cdab23269b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.703920 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.704068 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/83fd43df-919e-403d-9508-c4cdab23269b-registry-certificates\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.704107 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83fd43df-919e-403d-9508-c4cdab23269b-bound-sa-token\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.727953 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.805918 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/83fd43df-919e-403d-9508-c4cdab23269b-registry-certificates\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.806000 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83fd43df-919e-403d-9508-c4cdab23269b-bound-sa-token\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.806326 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/83fd43df-919e-403d-9508-c4cdab23269b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.806395 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83fd43df-919e-403d-9508-c4cdab23269b-trusted-ca\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc 
kubenswrapper[4746]: I1211 09:57:57.806432 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xphrt\" (UniqueName: \"kubernetes.io/projected/83fd43df-919e-403d-9508-c4cdab23269b-kube-api-access-xphrt\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.806490 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/83fd43df-919e-403d-9508-c4cdab23269b-registry-tls\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.806526 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/83fd43df-919e-403d-9508-c4cdab23269b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.808525 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/83fd43df-919e-403d-9508-c4cdab23269b-trusted-ca\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.808649 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/83fd43df-919e-403d-9508-c4cdab23269b-registry-certificates\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.808982 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/83fd43df-919e-403d-9508-c4cdab23269b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.814924 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/83fd43df-919e-403d-9508-c4cdab23269b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.826469 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xphrt\" (UniqueName: \"kubernetes.io/projected/83fd43df-919e-403d-9508-c4cdab23269b-kube-api-access-xphrt\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.828275 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/83fd43df-919e-403d-9508-c4cdab23269b-registry-tls\") pod \"image-registry-66df7c8f76-jdpcb\" (UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.829715 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/83fd43df-919e-403d-9508-c4cdab23269b-bound-sa-token\") pod \"image-registry-66df7c8f76-jdpcb\" 
(UID: \"83fd43df-919e-403d-9508-c4cdab23269b\") " pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:57:57 crc kubenswrapper[4746]: I1211 09:57:57.896959 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:58:00 crc kubenswrapper[4746]: I1211 09:58:00.079305 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:58:00 crc kubenswrapper[4746]: I1211 09:58:00.079360 4746 patch_prober.go:28] interesting pod/downloads-7954f5f757-nqbcs container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Dec 11 09:58:00 crc kubenswrapper[4746]: I1211 09:58:00.079653 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:58:00 crc kubenswrapper[4746]: I1211 09:58:00.079712 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-nqbcs" podUID="2895076f-4e51-4f1c-ae8b-e8e9d1b8888d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Dec 11 09:58:00 crc kubenswrapper[4746]: I1211 09:58:00.948866 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jdpcb"] Dec 11 09:58:01 crc kubenswrapper[4746]: I1211 09:58:01.311468 4746 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" event={"ID":"83fd43df-919e-403d-9508-c4cdab23269b","Type":"ContainerStarted","Data":"677d11c8dfe7b6904ce56080363ea6ab657b57fb92389c545f908c345afaba3a"} Dec 11 09:58:02 crc kubenswrapper[4746]: I1211 09:58:02.318603 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" event={"ID":"83fd43df-919e-403d-9508-c4cdab23269b","Type":"ContainerStarted","Data":"e5b0b33ccfa07ca591502300c6a66b497bd75a19953c398c4bfed7fa68dcac14"} Dec 11 09:58:02 crc kubenswrapper[4746]: I1211 09:58:02.320737 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:58:02 crc kubenswrapper[4746]: I1211 09:58:02.323842 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgg6h" event={"ID":"cccdab48-aeee-44f8-aebc-90129170ea8a","Type":"ContainerStarted","Data":"1375c6ce221f579ad2e661bb8efa52158186596f09fb5cf692a7a63d02699ca2"} Dec 11 09:58:02 crc kubenswrapper[4746]: I1211 09:58:02.344031 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" podStartSLOduration=5.344012974 podStartE2EDuration="5.344012974s" podCreationTimestamp="2025-12-11 09:57:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:58:02.338177866 +0000 UTC m=+255.198041179" watchObservedRunningTime="2025-12-11 09:58:02.344012974 +0000 UTC m=+255.203876287" Dec 11 09:58:04 crc kubenswrapper[4746]: I1211 09:58:04.340305 4746 generic.go:334] "Generic (PLEG): container finished" podID="cccdab48-aeee-44f8-aebc-90129170ea8a" containerID="1375c6ce221f579ad2e661bb8efa52158186596f09fb5cf692a7a63d02699ca2" exitCode=0 Dec 11 09:58:04 crc kubenswrapper[4746]: I1211 09:58:04.341104 4746 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgg6h" event={"ID":"cccdab48-aeee-44f8-aebc-90129170ea8a","Type":"ContainerDied","Data":"1375c6ce221f579ad2e661bb8efa52158186596f09fb5cf692a7a63d02699ca2"} Dec 11 09:58:07 crc kubenswrapper[4746]: I1211 09:58:07.465355 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-czfmv"] Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.629465 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s7kv7"] Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.647843 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vq5fs"] Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.653871 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pldtc"] Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.665371 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rkx2w"] Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.673826 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-krdwz"] Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.674013 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" podUID="09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6" containerName="marketplace-operator" containerID="cri-o://bf6e87a740bd07f7a1e7dbaaed2fff8903b5b07bce374c480cdb74ffa3da3c8c" gracePeriod=30 Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.689615 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5p9dz"] Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.696874 4746 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-hfln2"] Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.697756 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.705237 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvzvj"] Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.710635 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hfln2"] Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.716908 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bgg6h"] Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.723606 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-whqdv"] Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.772924 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw5m4\" (UniqueName: \"kubernetes.io/projected/71b25ce7-0542-4bbf-a7c7-ae760345ede3-kube-api-access-dw5m4\") pod \"marketplace-operator-79b997595-hfln2\" (UID: \"71b25ce7-0542-4bbf-a7c7-ae760345ede3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.773015 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71b25ce7-0542-4bbf-a7c7-ae760345ede3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hfln2\" (UID: \"71b25ce7-0542-4bbf-a7c7-ae760345ede3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.773094 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/71b25ce7-0542-4bbf-a7c7-ae760345ede3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hfln2\" (UID: \"71b25ce7-0542-4bbf-a7c7-ae760345ede3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.874018 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw5m4\" (UniqueName: \"kubernetes.io/projected/71b25ce7-0542-4bbf-a7c7-ae760345ede3-kube-api-access-dw5m4\") pod \"marketplace-operator-79b997595-hfln2\" (UID: \"71b25ce7-0542-4bbf-a7c7-ae760345ede3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.874128 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71b25ce7-0542-4bbf-a7c7-ae760345ede3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hfln2\" (UID: \"71b25ce7-0542-4bbf-a7c7-ae760345ede3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.874227 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/71b25ce7-0542-4bbf-a7c7-ae760345ede3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hfln2\" (UID: \"71b25ce7-0542-4bbf-a7c7-ae760345ede3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.876327 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/71b25ce7-0542-4bbf-a7c7-ae760345ede3-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hfln2\" (UID: 
\"71b25ce7-0542-4bbf-a7c7-ae760345ede3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.886919 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/71b25ce7-0542-4bbf-a7c7-ae760345ede3-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hfln2\" (UID: \"71b25ce7-0542-4bbf-a7c7-ae760345ede3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 09:58:08 crc kubenswrapper[4746]: I1211 09:58:08.890748 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw5m4\" (UniqueName: \"kubernetes.io/projected/71b25ce7-0542-4bbf-a7c7-ae760345ede3-kube-api-access-dw5m4\") pod \"marketplace-operator-79b997595-hfln2\" (UID: \"71b25ce7-0542-4bbf-a7c7-ae760345ede3\") " pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 09:58:09 crc kubenswrapper[4746]: I1211 09:58:09.018636 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 09:58:10 crc kubenswrapper[4746]: I1211 09:58:10.096330 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-nqbcs" Dec 11 09:58:10 crc kubenswrapper[4746]: I1211 09:58:10.378566 4746 generic.go:334] "Generic (PLEG): container finished" podID="09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6" containerID="bf6e87a740bd07f7a1e7dbaaed2fff8903b5b07bce374c480cdb74ffa3da3c8c" exitCode=0 Dec 11 09:58:10 crc kubenswrapper[4746]: I1211 09:58:10.378722 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" event={"ID":"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6","Type":"ContainerDied","Data":"bf6e87a740bd07f7a1e7dbaaed2fff8903b5b07bce374c480cdb74ffa3da3c8c"} Dec 11 09:58:10 crc kubenswrapper[4746]: I1211 09:58:10.592069 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" Dec 11 09:58:10 crc kubenswrapper[4746]: I1211 09:58:10.679390 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hfln2"] Dec 11 09:58:10 crc kubenswrapper[4746]: W1211 09:58:10.685872 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71b25ce7_0542_4bbf_a7c7_ae760345ede3.slice/crio-8f7ec54c035ad4039afdd128e07de408e2b276603afda94d62303776a87ae56d WatchSource:0}: Error finding container 8f7ec54c035ad4039afdd128e07de408e2b276603afda94d62303776a87ae56d: Status 404 returned error can't find the container with id 8f7ec54c035ad4039afdd128e07de408e2b276603afda94d62303776a87ae56d Dec 11 09:58:10 crc kubenswrapper[4746]: I1211 09:58:10.716312 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-marketplace-operator-metrics\") pod \"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6\" (UID: \"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6\") " Dec 11 09:58:10 crc kubenswrapper[4746]: I1211 09:58:10.716396 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmxmz\" (UniqueName: \"kubernetes.io/projected/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-kube-api-access-gmxmz\") pod \"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6\" (UID: \"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6\") " Dec 11 09:58:10 crc kubenswrapper[4746]: I1211 09:58:10.716497 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-marketplace-trusted-ca\") pod \"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6\" (UID: \"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6\") " Dec 11 09:58:10 crc kubenswrapper[4746]: I1211 09:58:10.717545 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6" (UID: "09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:58:10 crc kubenswrapper[4746]: I1211 09:58:10.722889 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6" (UID: "09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:58:10 crc kubenswrapper[4746]: I1211 09:58:10.724341 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-kube-api-access-gmxmz" (OuterVolumeSpecName: "kube-api-access-gmxmz") pod "09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6" (UID: "09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6"). InnerVolumeSpecName "kube-api-access-gmxmz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:58:10 crc kubenswrapper[4746]: I1211 09:58:10.818264 4746 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:10 crc kubenswrapper[4746]: I1211 09:58:10.818300 4746 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:10 crc kubenswrapper[4746]: I1211 09:58:10.818310 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmxmz\" (UniqueName: \"kubernetes.io/projected/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6-kube-api-access-gmxmz\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.388056 4746 generic.go:334] "Generic (PLEG): container finished" podID="9be0459f-f161-4203-868a-ba2d577c96d1" containerID="fb3740865b8bed6bc6334da7c9ce6f0d06f4ed7758cae611219c67c7061cb5ec" exitCode=0 Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.388256 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pldtc" event={"ID":"9be0459f-f161-4203-868a-ba2d577c96d1","Type":"ContainerDied","Data":"fb3740865b8bed6bc6334da7c9ce6f0d06f4ed7758cae611219c67c7061cb5ec"} Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 
09:58:11.391660 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" event={"ID":"71b25ce7-0542-4bbf-a7c7-ae760345ede3","Type":"ContainerStarted","Data":"cc526a9a0106d332ef44cae57384b0ea3faa64b3235a4e58cc5345770785002a"} Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.391717 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" event={"ID":"71b25ce7-0542-4bbf-a7c7-ae760345ede3","Type":"ContainerStarted","Data":"8f7ec54c035ad4039afdd128e07de408e2b276603afda94d62303776a87ae56d"} Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.392083 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.393884 4746 generic.go:334] "Generic (PLEG): container finished" podID="feadbc9f-ff9b-47e3-bb8a-121af59a7ff7" containerID="7ef9a2a0937b3d17348cc769ab1faab93142fd0284f4071d1adbfcd62c2c1a7f" exitCode=0 Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.394101 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7kv7" event={"ID":"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7","Type":"ContainerDied","Data":"7ef9a2a0937b3d17348cc769ab1faab93142fd0284f4071d1adbfcd62c2c1a7f"} Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.399217 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.399411 4746 generic.go:334] "Generic (PLEG): container finished" podID="d5155500-7423-4b61-8724-8172a012cd8a" containerID="85c296d770087fc4ae33c602f6b5c2e3119130081ca057fc985891b1a93428a3" exitCode=0 Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.399491 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-rkx2w" event={"ID":"d5155500-7423-4b61-8724-8172a012cd8a","Type":"ContainerDied","Data":"85c296d770087fc4ae33c602f6b5c2e3119130081ca057fc985891b1a93428a3"} Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.401592 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" event={"ID":"09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6","Type":"ContainerDied","Data":"f2b4b13db9a44d2b3c29481de915df800378fc11cdd45fbebab590643b5f130b"} Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.401642 4746 scope.go:117] "RemoveContainer" containerID="bf6e87a740bd07f7a1e7dbaaed2fff8903b5b07bce374c480cdb74ffa3da3c8c" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.401734 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.525086 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" podStartSLOduration=3.5250635839999998 podStartE2EDuration="3.525063584s" podCreationTimestamp="2025-12-11 09:58:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:58:11.522660779 +0000 UTC m=+264.382524102" watchObservedRunningTime="2025-12-11 09:58:11.525063584 +0000 UTC m=+264.384926897" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.543178 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-krdwz"] Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.543229 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-krdwz"] Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.584133 4746 patch_prober.go:28] interesting 
pod/marketplace-operator-79b997595-krdwz container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.584207 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-krdwz" podUID="09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.640662 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6" path="/var/lib/kubelet/pods/09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6/volumes" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.702874 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pldtc" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.765476 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-s7kv7" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.769506 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rkx2w" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.838369 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5phx\" (UniqueName: \"kubernetes.io/projected/d5155500-7423-4b61-8724-8172a012cd8a-kube-api-access-c5phx\") pod \"d5155500-7423-4b61-8724-8172a012cd8a\" (UID: \"d5155500-7423-4b61-8724-8172a012cd8a\") " Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.838421 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bkdz\" (UniqueName: \"kubernetes.io/projected/9be0459f-f161-4203-868a-ba2d577c96d1-kube-api-access-2bkdz\") pod \"9be0459f-f161-4203-868a-ba2d577c96d1\" (UID: \"9be0459f-f161-4203-868a-ba2d577c96d1\") " Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.838461 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-utilities\") pod \"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7\" (UID: \"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7\") " Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.838490 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5155500-7423-4b61-8724-8172a012cd8a-catalog-content\") pod \"d5155500-7423-4b61-8724-8172a012cd8a\" (UID: \"d5155500-7423-4b61-8724-8172a012cd8a\") " Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.838584 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f67fc\" (UniqueName: \"kubernetes.io/projected/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-kube-api-access-f67fc\") pod \"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7\" (UID: \"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7\") " Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.838643 4746 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be0459f-f161-4203-868a-ba2d577c96d1-utilities\") pod \"9be0459f-f161-4203-868a-ba2d577c96d1\" (UID: \"9be0459f-f161-4203-868a-ba2d577c96d1\") " Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.839250 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5155500-7423-4b61-8724-8172a012cd8a-utilities\") pod \"d5155500-7423-4b61-8724-8172a012cd8a\" (UID: \"d5155500-7423-4b61-8724-8172a012cd8a\") " Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.839299 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-catalog-content\") pod \"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7\" (UID: \"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7\") " Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.839331 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be0459f-f161-4203-868a-ba2d577c96d1-catalog-content\") pod \"9be0459f-f161-4203-868a-ba2d577c96d1\" (UID: \"9be0459f-f161-4203-868a-ba2d577c96d1\") " Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.839345 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9be0459f-f161-4203-868a-ba2d577c96d1-utilities" (OuterVolumeSpecName: "utilities") pod "9be0459f-f161-4203-868a-ba2d577c96d1" (UID: "9be0459f-f161-4203-868a-ba2d577c96d1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.839452 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-utilities" (OuterVolumeSpecName: "utilities") pod "feadbc9f-ff9b-47e3-bb8a-121af59a7ff7" (UID: "feadbc9f-ff9b-47e3-bb8a-121af59a7ff7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.839877 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.839904 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be0459f-f161-4203-868a-ba2d577c96d1-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.839972 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5155500-7423-4b61-8724-8172a012cd8a-utilities" (OuterVolumeSpecName: "utilities") pod "d5155500-7423-4b61-8724-8172a012cd8a" (UID: "d5155500-7423-4b61-8724-8172a012cd8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.842537 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be0459f-f161-4203-868a-ba2d577c96d1-kube-api-access-2bkdz" (OuterVolumeSpecName: "kube-api-access-2bkdz") pod "9be0459f-f161-4203-868a-ba2d577c96d1" (UID: "9be0459f-f161-4203-868a-ba2d577c96d1"). InnerVolumeSpecName "kube-api-access-2bkdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.842718 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-kube-api-access-f67fc" (OuterVolumeSpecName: "kube-api-access-f67fc") pod "feadbc9f-ff9b-47e3-bb8a-121af59a7ff7" (UID: "feadbc9f-ff9b-47e3-bb8a-121af59a7ff7"). InnerVolumeSpecName "kube-api-access-f67fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.844249 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5155500-7423-4b61-8724-8172a012cd8a-kube-api-access-c5phx" (OuterVolumeSpecName: "kube-api-access-c5phx") pod "d5155500-7423-4b61-8724-8172a012cd8a" (UID: "d5155500-7423-4b61-8724-8172a012cd8a"). InnerVolumeSpecName "kube-api-access-c5phx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.914986 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "feadbc9f-ff9b-47e3-bb8a-121af59a7ff7" (UID: "feadbc9f-ff9b-47e3-bb8a-121af59a7ff7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.916114 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5155500-7423-4b61-8724-8172a012cd8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5155500-7423-4b61-8724-8172a012cd8a" (UID: "d5155500-7423-4b61-8724-8172a012cd8a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.934386 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9be0459f-f161-4203-868a-ba2d577c96d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9be0459f-f161-4203-868a-ba2d577c96d1" (UID: "9be0459f-f161-4203-868a-ba2d577c96d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.940836 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bkdz\" (UniqueName: \"kubernetes.io/projected/9be0459f-f161-4203-868a-ba2d577c96d1-kube-api-access-2bkdz\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.940890 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5155500-7423-4b61-8724-8172a012cd8a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.940905 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f67fc\" (UniqueName: \"kubernetes.io/projected/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-kube-api-access-f67fc\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.940917 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5155500-7423-4b61-8724-8172a012cd8a-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.940933 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.940945 4746 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be0459f-f161-4203-868a-ba2d577c96d1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:11 crc kubenswrapper[4746]: I1211 09:58:11.940958 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5phx\" (UniqueName: \"kubernetes.io/projected/d5155500-7423-4b61-8724-8172a012cd8a-kube-api-access-c5phx\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.408944 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pvzvj" podUID="611ec549-a1d2-4b9a-b67e-2216ec21327a" containerName="extract-content" containerID="cri-o://adc9622de8f5389500cad58e6498d4cfd1caf524bb4badac7870746cd3d1779d" gracePeriod=30 Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.409014 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvzvj" event={"ID":"611ec549-a1d2-4b9a-b67e-2216ec21327a","Type":"ContainerStarted","Data":"adc9622de8f5389500cad58e6498d4cfd1caf524bb4badac7870746cd3d1779d"} Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.410859 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq5fs" event={"ID":"492f0d06-a8d7-4f81-a86a-5f1caea059cd","Type":"ContainerStarted","Data":"193d64641d157ad28ca0bb97b35fca5097ef7bde0d72be65a45c8f3a7cd9f0d0"} Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.411934 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vq5fs" podUID="492f0d06-a8d7-4f81-a86a-5f1caea059cd" containerName="extract-content" containerID="cri-o://193d64641d157ad28ca0bb97b35fca5097ef7bde0d72be65a45c8f3a7cd9f0d0" gracePeriod=30 Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.420581 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rkx2w" 
event={"ID":"d5155500-7423-4b61-8724-8172a012cd8a","Type":"ContainerDied","Data":"af5c9bc7936d4d954939f787e6a3b4f9573e45e13006c0c7ae995eaca3d318f2"} Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.420634 4746 scope.go:117] "RemoveContainer" containerID="85c296d770087fc4ae33c602f6b5c2e3119130081ca057fc985891b1a93428a3" Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.420733 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rkx2w" Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.425265 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgg6h" event={"ID":"cccdab48-aeee-44f8-aebc-90129170ea8a","Type":"ContainerStarted","Data":"4b0bf9ef455186a9aa17f956618011b5a5837b35a1bae0e2178df84cebfe4b54"} Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.425422 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bgg6h" podUID="cccdab48-aeee-44f8-aebc-90129170ea8a" containerName="registry-server" containerID="cri-o://4b0bf9ef455186a9aa17f956618011b5a5837b35a1bae0e2178df84cebfe4b54" gracePeriod=30 Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.431764 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-s7kv7" event={"ID":"feadbc9f-ff9b-47e3-bb8a-121af59a7ff7","Type":"ContainerDied","Data":"c08b0720ec61b611df7c4e1c0b37df23b6a9bd3902c3260cfeb4a9bab5bec398"} Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.431865 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-s7kv7" Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.435590 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5p9dz" event={"ID":"7ea31c3c-c4a5-47be-bddf-49b679f030d6","Type":"ContainerStarted","Data":"2ce55d226ad22f07b5960a4f794636d3b6f45f68dc54db108fb98a751cd44675"} Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.435671 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5p9dz" podUID="7ea31c3c-c4a5-47be-bddf-49b679f030d6" containerName="extract-content" containerID="cri-o://2ce55d226ad22f07b5960a4f794636d3b6f45f68dc54db108fb98a751cd44675" gracePeriod=30 Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.439616 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whqdv" event={"ID":"5669b14f-d850-4cfc-a7ae-18f880dbccb5","Type":"ContainerStarted","Data":"d1ea09cb45b85d1a3ebda15b1c13f817b0376d529a22edbd89ed6d49eb05af59"} Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.439885 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-whqdv" podUID="5669b14f-d850-4cfc-a7ae-18f880dbccb5" containerName="extract-content" containerID="cri-o://d1ea09cb45b85d1a3ebda15b1c13f817b0376d529a22edbd89ed6d49eb05af59" gracePeriod=30 Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.445599 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pldtc" event={"ID":"9be0459f-f161-4203-868a-ba2d577c96d1","Type":"ContainerDied","Data":"2014e9a604626b16cf8f0a0ed2ca8cd2ff5a44629abec5b44dda4927dca4b0d2"} Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.445626 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pldtc" Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.475705 4746 scope.go:117] "RemoveContainer" containerID="9a6b779ea57745070c483d57aaa6493fad5d31f81d1dba8f27233bab4f1a0b7b" Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.492575 4746 scope.go:117] "RemoveContainer" containerID="7ef9a2a0937b3d17348cc769ab1faab93142fd0284f4071d1adbfcd62c2c1a7f" Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.518381 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bgg6h" podStartSLOduration=7.291181734 podStartE2EDuration="1m41.518365629s" podCreationTimestamp="2025-12-11 09:56:31 +0000 UTC" firstStartedPulling="2025-12-11 09:56:37.596002504 +0000 UTC m=+170.455865817" lastFinishedPulling="2025-12-11 09:58:11.823186399 +0000 UTC m=+264.683049712" observedRunningTime="2025-12-11 09:58:12.49848576 +0000 UTC m=+265.358349073" watchObservedRunningTime="2025-12-11 09:58:12.518365629 +0000 UTC m=+265.378228942" Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.559900 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rkx2w"] Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.562377 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rkx2w"] Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.581336 4746 scope.go:117] "RemoveContainer" containerID="9229aaf0fc8b38407e09a655c5e4ad591b27d63d803d7ede5dc8f508bb7af7f5" Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.583755 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-s7kv7"] Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.586880 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-s7kv7"] Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 
09:58:12.608468 4746 scope.go:117] "RemoveContainer" containerID="fb3740865b8bed6bc6334da7c9ce6f0d06f4ed7758cae611219c67c7061cb5ec" Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.621769 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pldtc"] Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.626623 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pldtc"] Dec 11 09:58:12 crc kubenswrapper[4746]: I1211 09:58:12.626864 4746 scope.go:117] "RemoveContainer" containerID="55a4e7f1e05a46ffa7d1d32b549da0f8040f119fe45aa30e84926e8c1687d0fb" Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.470789 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pvzvj_611ec549-a1d2-4b9a-b67e-2216ec21327a/extract-content/0.log" Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.476094 4746 generic.go:334] "Generic (PLEG): container finished" podID="611ec549-a1d2-4b9a-b67e-2216ec21327a" containerID="adc9622de8f5389500cad58e6498d4cfd1caf524bb4badac7870746cd3d1779d" exitCode=2 Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.476246 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvzvj" event={"ID":"611ec549-a1d2-4b9a-b67e-2216ec21327a","Type":"ContainerDied","Data":"adc9622de8f5389500cad58e6498d4cfd1caf524bb4badac7870746cd3d1779d"} Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.477659 4746 generic.go:334] "Generic (PLEG): container finished" podID="492f0d06-a8d7-4f81-a86a-5f1caea059cd" containerID="193d64641d157ad28ca0bb97b35fca5097ef7bde0d72be65a45c8f3a7cd9f0d0" exitCode=0 Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.477717 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq5fs" 
event={"ID":"492f0d06-a8d7-4f81-a86a-5f1caea059cd","Type":"ContainerDied","Data":"193d64641d157ad28ca0bb97b35fca5097ef7bde0d72be65a45c8f3a7cd9f0d0"} Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.479463 4746 generic.go:334] "Generic (PLEG): container finished" podID="7ea31c3c-c4a5-47be-bddf-49b679f030d6" containerID="2ce55d226ad22f07b5960a4f794636d3b6f45f68dc54db108fb98a751cd44675" exitCode=0 Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.479493 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5p9dz" event={"ID":"7ea31c3c-c4a5-47be-bddf-49b679f030d6","Type":"ContainerDied","Data":"2ce55d226ad22f07b5960a4f794636d3b6f45f68dc54db108fb98a751cd44675"} Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.481647 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-whqdv_5669b14f-d850-4cfc-a7ae-18f880dbccb5/extract-content/0.log" Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.481994 4746 generic.go:334] "Generic (PLEG): container finished" podID="5669b14f-d850-4cfc-a7ae-18f880dbccb5" containerID="d1ea09cb45b85d1a3ebda15b1c13f817b0376d529a22edbd89ed6d49eb05af59" exitCode=2 Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.482030 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whqdv" event={"ID":"5669b14f-d850-4cfc-a7ae-18f880dbccb5","Type":"ContainerDied","Data":"d1ea09cb45b85d1a3ebda15b1c13f817b0376d529a22edbd89ed6d49eb05af59"} Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.604079 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vq5fs" Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.636163 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be0459f-f161-4203-868a-ba2d577c96d1" path="/var/lib/kubelet/pods/9be0459f-f161-4203-868a-ba2d577c96d1/volumes" Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.636971 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5155500-7423-4b61-8724-8172a012cd8a" path="/var/lib/kubelet/pods/d5155500-7423-4b61-8724-8172a012cd8a/volumes" Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.637600 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feadbc9f-ff9b-47e3-bb8a-121af59a7ff7" path="/var/lib/kubelet/pods/feadbc9f-ff9b-47e3-bb8a-121af59a7ff7/volumes" Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.672118 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/492f0d06-a8d7-4f81-a86a-5f1caea059cd-utilities\") pod \"492f0d06-a8d7-4f81-a86a-5f1caea059cd\" (UID: \"492f0d06-a8d7-4f81-a86a-5f1caea059cd\") " Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.672223 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/492f0d06-a8d7-4f81-a86a-5f1caea059cd-catalog-content\") pod \"492f0d06-a8d7-4f81-a86a-5f1caea059cd\" (UID: \"492f0d06-a8d7-4f81-a86a-5f1caea059cd\") " Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.672272 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ghz8\" (UniqueName: \"kubernetes.io/projected/492f0d06-a8d7-4f81-a86a-5f1caea059cd-kube-api-access-2ghz8\") pod \"492f0d06-a8d7-4f81-a86a-5f1caea059cd\" (UID: \"492f0d06-a8d7-4f81-a86a-5f1caea059cd\") " Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.672945 4746 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/492f0d06-a8d7-4f81-a86a-5f1caea059cd-utilities" (OuterVolumeSpecName: "utilities") pod "492f0d06-a8d7-4f81-a86a-5f1caea059cd" (UID: "492f0d06-a8d7-4f81-a86a-5f1caea059cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.677287 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/492f0d06-a8d7-4f81-a86a-5f1caea059cd-kube-api-access-2ghz8" (OuterVolumeSpecName: "kube-api-access-2ghz8") pod "492f0d06-a8d7-4f81-a86a-5f1caea059cd" (UID: "492f0d06-a8d7-4f81-a86a-5f1caea059cd"). InnerVolumeSpecName "kube-api-access-2ghz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.774291 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/492f0d06-a8d7-4f81-a86a-5f1caea059cd-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:13 crc kubenswrapper[4746]: I1211 09:58:13.774354 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ghz8\" (UniqueName: \"kubernetes.io/projected/492f0d06-a8d7-4f81-a86a-5f1caea059cd-kube-api-access-2ghz8\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.489617 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vq5fs" event={"ID":"492f0d06-a8d7-4f81-a86a-5f1caea059cd","Type":"ContainerDied","Data":"f8dec152ec4c959816278e452780b8cfb5efc5fbbc0e308e763bd684709d6cf5"} Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.489671 4746 scope.go:117] "RemoveContainer" containerID="193d64641d157ad28ca0bb97b35fca5097ef7bde0d72be65a45c8f3a7cd9f0d0" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.489781 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vq5fs" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.515270 4746 scope.go:117] "RemoveContainer" containerID="8fe538cc9f6f9bda884a3906a8dcfc213fd9a768d915c3b587d251c1d1095578" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.815988 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mh7kg"] Dec 11 09:58:14 crc kubenswrapper[4746]: E1211 09:58:14.816474 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492f0d06-a8d7-4f81-a86a-5f1caea059cd" containerName="extract-utilities" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.816503 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="492f0d06-a8d7-4f81-a86a-5f1caea059cd" containerName="extract-utilities" Dec 11 09:58:14 crc kubenswrapper[4746]: E1211 09:58:14.816523 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be0459f-f161-4203-868a-ba2d577c96d1" containerName="extract-utilities" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.816528 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be0459f-f161-4203-868a-ba2d577c96d1" containerName="extract-utilities" Dec 11 09:58:14 crc kubenswrapper[4746]: E1211 09:58:14.816537 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feadbc9f-ff9b-47e3-bb8a-121af59a7ff7" containerName="extract-content" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.816543 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="feadbc9f-ff9b-47e3-bb8a-121af59a7ff7" containerName="extract-content" Dec 11 09:58:14 crc kubenswrapper[4746]: E1211 09:58:14.816554 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5155500-7423-4b61-8724-8172a012cd8a" containerName="extract-content" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.816560 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5155500-7423-4b61-8724-8172a012cd8a" 
containerName="extract-content" Dec 11 09:58:14 crc kubenswrapper[4746]: E1211 09:58:14.816568 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be0459f-f161-4203-868a-ba2d577c96d1" containerName="extract-content" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.816573 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be0459f-f161-4203-868a-ba2d577c96d1" containerName="extract-content" Dec 11 09:58:14 crc kubenswrapper[4746]: E1211 09:58:14.816584 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5155500-7423-4b61-8724-8172a012cd8a" containerName="extract-utilities" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.816590 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5155500-7423-4b61-8724-8172a012cd8a" containerName="extract-utilities" Dec 11 09:58:14 crc kubenswrapper[4746]: E1211 09:58:14.816598 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6" containerName="marketplace-operator" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.816604 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6" containerName="marketplace-operator" Dec 11 09:58:14 crc kubenswrapper[4746]: E1211 09:58:14.816613 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feadbc9f-ff9b-47e3-bb8a-121af59a7ff7" containerName="extract-utilities" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.816619 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="feadbc9f-ff9b-47e3-bb8a-121af59a7ff7" containerName="extract-utilities" Dec 11 09:58:14 crc kubenswrapper[4746]: E1211 09:58:14.816627 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="492f0d06-a8d7-4f81-a86a-5f1caea059cd" containerName="extract-content" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.816632 4746 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="492f0d06-a8d7-4f81-a86a-5f1caea059cd" containerName="extract-content" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.816919 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="09e5c4e1-1e5f-4d59-aca9-414d8d7e72a6" containerName="marketplace-operator" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.816932 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="492f0d06-a8d7-4f81-a86a-5f1caea059cd" containerName="extract-content" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.816940 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="feadbc9f-ff9b-47e3-bb8a-121af59a7ff7" containerName="extract-content" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.816948 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5155500-7423-4b61-8724-8172a012cd8a" containerName="extract-content" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.816959 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be0459f-f161-4203-868a-ba2d577c96d1" containerName="extract-content" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.817741 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mh7kg" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.821240 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.835928 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mh7kg"] Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.900747 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc103c2c-47fd-44d7-819d-2a75b4a198de-utilities\") pod \"community-operators-mh7kg\" (UID: \"cc103c2c-47fd-44d7-819d-2a75b4a198de\") " pod="openshift-marketplace/community-operators-mh7kg" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.900807 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc103c2c-47fd-44d7-819d-2a75b4a198de-catalog-content\") pod \"community-operators-mh7kg\" (UID: \"cc103c2c-47fd-44d7-819d-2a75b4a198de\") " pod="openshift-marketplace/community-operators-mh7kg" Dec 11 09:58:14 crc kubenswrapper[4746]: I1211 09:58:14.900893 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzfv6\" (UniqueName: \"kubernetes.io/projected/cc103c2c-47fd-44d7-819d-2a75b4a198de-kube-api-access-qzfv6\") pod \"community-operators-mh7kg\" (UID: \"cc103c2c-47fd-44d7-819d-2a75b4a198de\") " pod="openshift-marketplace/community-operators-mh7kg" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.002331 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzfv6\" (UniqueName: \"kubernetes.io/projected/cc103c2c-47fd-44d7-819d-2a75b4a198de-kube-api-access-qzfv6\") pod \"community-operators-mh7kg\" 
(UID: \"cc103c2c-47fd-44d7-819d-2a75b4a198de\") " pod="openshift-marketplace/community-operators-mh7kg" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.002421 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc103c2c-47fd-44d7-819d-2a75b4a198de-utilities\") pod \"community-operators-mh7kg\" (UID: \"cc103c2c-47fd-44d7-819d-2a75b4a198de\") " pod="openshift-marketplace/community-operators-mh7kg" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.002463 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc103c2c-47fd-44d7-819d-2a75b4a198de-catalog-content\") pod \"community-operators-mh7kg\" (UID: \"cc103c2c-47fd-44d7-819d-2a75b4a198de\") " pod="openshift-marketplace/community-operators-mh7kg" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.003198 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc103c2c-47fd-44d7-819d-2a75b4a198de-utilities\") pod \"community-operators-mh7kg\" (UID: \"cc103c2c-47fd-44d7-819d-2a75b4a198de\") " pod="openshift-marketplace/community-operators-mh7kg" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.003240 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc103c2c-47fd-44d7-819d-2a75b4a198de-catalog-content\") pod \"community-operators-mh7kg\" (UID: \"cc103c2c-47fd-44d7-819d-2a75b4a198de\") " pod="openshift-marketplace/community-operators-mh7kg" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.021123 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzfv6\" (UniqueName: \"kubernetes.io/projected/cc103c2c-47fd-44d7-819d-2a75b4a198de-kube-api-access-qzfv6\") pod \"community-operators-mh7kg\" (UID: \"cc103c2c-47fd-44d7-819d-2a75b4a198de\") " 
pod="openshift-marketplace/community-operators-mh7kg" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.111404 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pvzvj_611ec549-a1d2-4b9a-b67e-2216ec21327a/extract-content/0.log" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.111724 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvzvj" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.132306 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mh7kg" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.204862 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/611ec549-a1d2-4b9a-b67e-2216ec21327a-catalog-content\") pod \"611ec549-a1d2-4b9a-b67e-2216ec21327a\" (UID: \"611ec549-a1d2-4b9a-b67e-2216ec21327a\") " Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.204925 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw8xr\" (UniqueName: \"kubernetes.io/projected/611ec549-a1d2-4b9a-b67e-2216ec21327a-kube-api-access-mw8xr\") pod \"611ec549-a1d2-4b9a-b67e-2216ec21327a\" (UID: \"611ec549-a1d2-4b9a-b67e-2216ec21327a\") " Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.204991 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/611ec549-a1d2-4b9a-b67e-2216ec21327a-utilities\") pod \"611ec549-a1d2-4b9a-b67e-2216ec21327a\" (UID: \"611ec549-a1d2-4b9a-b67e-2216ec21327a\") " Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.213256 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/611ec549-a1d2-4b9a-b67e-2216ec21327a-kube-api-access-mw8xr" (OuterVolumeSpecName: 
"kube-api-access-mw8xr") pod "611ec549-a1d2-4b9a-b67e-2216ec21327a" (UID: "611ec549-a1d2-4b9a-b67e-2216ec21327a"). InnerVolumeSpecName "kube-api-access-mw8xr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.220786 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/611ec549-a1d2-4b9a-b67e-2216ec21327a-utilities" (OuterVolumeSpecName: "utilities") pod "611ec549-a1d2-4b9a-b67e-2216ec21327a" (UID: "611ec549-a1d2-4b9a-b67e-2216ec21327a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.233393 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/611ec549-a1d2-4b9a-b67e-2216ec21327a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "611ec549-a1d2-4b9a-b67e-2216ec21327a" (UID: "611ec549-a1d2-4b9a-b67e-2216ec21327a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.307079 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/611ec549-a1d2-4b9a-b67e-2216ec21327a-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.307319 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/611ec549-a1d2-4b9a-b67e-2216ec21327a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.307331 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw8xr\" (UniqueName: \"kubernetes.io/projected/611ec549-a1d2-4b9a-b67e-2216ec21327a-kube-api-access-mw8xr\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.330631 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mh7kg"] Dec 11 09:58:15 crc kubenswrapper[4746]: W1211 09:58:15.341492 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc103c2c_47fd_44d7_819d_2a75b4a198de.slice/crio-c18df20c0d0bf65e0cdee0c8e0721119531f59c2eacb7d475b53859646faef84 WatchSource:0}: Error finding container c18df20c0d0bf65e0cdee0c8e0721119531f59c2eacb7d475b53859646faef84: Status 404 returned error can't find the container with id c18df20c0d0bf65e0cdee0c8e0721119531f59c2eacb7d475b53859646faef84 Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.514741 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-pvzvj_611ec549-a1d2-4b9a-b67e-2216ec21327a/extract-content/0.log" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.516979 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvzvj" 
event={"ID":"611ec549-a1d2-4b9a-b67e-2216ec21327a","Type":"ContainerDied","Data":"93f2714411df2c91bcbd9c4f933403f58dcbf74e496611a726b46b599f6b14b2"} Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.517059 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvzvj" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.517121 4746 scope.go:117] "RemoveContainer" containerID="adc9622de8f5389500cad58e6498d4cfd1caf524bb4badac7870746cd3d1779d" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.520253 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh7kg" event={"ID":"cc103c2c-47fd-44d7-819d-2a75b4a198de","Type":"ContainerStarted","Data":"c18df20c0d0bf65e0cdee0c8e0721119531f59c2eacb7d475b53859646faef84"} Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.527191 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5p9dz" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.556492 4746 scope.go:117] "RemoveContainer" containerID="4072a9ff8418162d5c049da6482babdfc97069891f207a32cafa98f1b1836121" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.600080 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvzvj"] Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.610984 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea31c3c-c4a5-47be-bddf-49b679f030d6-utilities\") pod \"7ea31c3c-c4a5-47be-bddf-49b679f030d6\" (UID: \"7ea31c3c-c4a5-47be-bddf-49b679f030d6\") " Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.611061 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7ea31c3c-c4a5-47be-bddf-49b679f030d6-catalog-content\") pod \"7ea31c3c-c4a5-47be-bddf-49b679f030d6\" (UID: \"7ea31c3c-c4a5-47be-bddf-49b679f030d6\") " Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.611259 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xghhn\" (UniqueName: \"kubernetes.io/projected/7ea31c3c-c4a5-47be-bddf-49b679f030d6-kube-api-access-xghhn\") pod \"7ea31c3c-c4a5-47be-bddf-49b679f030d6\" (UID: \"7ea31c3c-c4a5-47be-bddf-49b679f030d6\") " Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.612582 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ea31c3c-c4a5-47be-bddf-49b679f030d6-utilities" (OuterVolumeSpecName: "utilities") pod "7ea31c3c-c4a5-47be-bddf-49b679f030d6" (UID: "7ea31c3c-c4a5-47be-bddf-49b679f030d6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.622011 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvzvj"] Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.624277 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ea31c3c-c4a5-47be-bddf-49b679f030d6-kube-api-access-xghhn" (OuterVolumeSpecName: "kube-api-access-xghhn") pod "7ea31c3c-c4a5-47be-bddf-49b679f030d6" (UID: "7ea31c3c-c4a5-47be-bddf-49b679f030d6"). InnerVolumeSpecName "kube-api-access-xghhn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.638375 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="611ec549-a1d2-4b9a-b67e-2216ec21327a" path="/var/lib/kubelet/pods/611ec549-a1d2-4b9a-b67e-2216ec21327a/volumes" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.672469 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ea31c3c-c4a5-47be-bddf-49b679f030d6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ea31c3c-c4a5-47be-bddf-49b679f030d6" (UID: "7ea31c3c-c4a5-47be-bddf-49b679f030d6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.713288 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xghhn\" (UniqueName: \"kubernetes.io/projected/7ea31c3c-c4a5-47be-bddf-49b679f030d6-kube-api-access-xghhn\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.713319 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ea31c3c-c4a5-47be-bddf-49b679f030d6-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:15 crc kubenswrapper[4746]: I1211 09:58:15.713330 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ea31c3c-c4a5-47be-bddf-49b679f030d6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.528727 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh7kg" event={"ID":"cc103c2c-47fd-44d7-819d-2a75b4a198de","Type":"ContainerStarted","Data":"a8c03f06c90cf8e015d0cf3f9c950e34ef2c8e9a8557973f8eeb123aba17a80d"} Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.530557 4746 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-5p9dz" event={"ID":"7ea31c3c-c4a5-47be-bddf-49b679f030d6","Type":"ContainerDied","Data":"2e685609e2a0c86a28a550eae6c08d111501f4799647a2b75c7e87362fa502f6"} Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.530618 4746 scope.go:117] "RemoveContainer" containerID="2ce55d226ad22f07b5960a4f794636d3b6f45f68dc54db108fb98a751cd44675" Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.530642 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5p9dz" Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.532033 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bgg6h_cccdab48-aeee-44f8-aebc-90129170ea8a/registry-server/0.log" Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.533947 4746 generic.go:334] "Generic (PLEG): container finished" podID="cccdab48-aeee-44f8-aebc-90129170ea8a" containerID="4b0bf9ef455186a9aa17f956618011b5a5837b35a1bae0e2178df84cebfe4b54" exitCode=1 Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.534073 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgg6h" event={"ID":"cccdab48-aeee-44f8-aebc-90129170ea8a","Type":"ContainerDied","Data":"4b0bf9ef455186a9aa17f956618011b5a5837b35a1bae0e2178df84cebfe4b54"} Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.612248 4746 scope.go:117] "RemoveContainer" containerID="04fda1472b21ce93cb61e3b765b899279d11dc37f7ed4fa433adf55958375df5" Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.703982 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5p9dz"] Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.708507 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5p9dz"] Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.724820 4746 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-whqdv_5669b14f-d850-4cfc-a7ae-18f880dbccb5/extract-content/0.log" Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.725428 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whqdv" Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.891250 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8wm4\" (UniqueName: \"kubernetes.io/projected/5669b14f-d850-4cfc-a7ae-18f880dbccb5-kube-api-access-m8wm4\") pod \"5669b14f-d850-4cfc-a7ae-18f880dbccb5\" (UID: \"5669b14f-d850-4cfc-a7ae-18f880dbccb5\") " Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.891363 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5669b14f-d850-4cfc-a7ae-18f880dbccb5-catalog-content\") pod \"5669b14f-d850-4cfc-a7ae-18f880dbccb5\" (UID: \"5669b14f-d850-4cfc-a7ae-18f880dbccb5\") " Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.891447 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5669b14f-d850-4cfc-a7ae-18f880dbccb5-utilities\") pod \"5669b14f-d850-4cfc-a7ae-18f880dbccb5\" (UID: \"5669b14f-d850-4cfc-a7ae-18f880dbccb5\") " Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.892712 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5669b14f-d850-4cfc-a7ae-18f880dbccb5-utilities" (OuterVolumeSpecName: "utilities") pod "5669b14f-d850-4cfc-a7ae-18f880dbccb5" (UID: "5669b14f-d850-4cfc-a7ae-18f880dbccb5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.896196 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5669b14f-d850-4cfc-a7ae-18f880dbccb5-kube-api-access-m8wm4" (OuterVolumeSpecName: "kube-api-access-m8wm4") pod "5669b14f-d850-4cfc-a7ae-18f880dbccb5" (UID: "5669b14f-d850-4cfc-a7ae-18f880dbccb5"). InnerVolumeSpecName "kube-api-access-m8wm4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.956690 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5669b14f-d850-4cfc-a7ae-18f880dbccb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5669b14f-d850-4cfc-a7ae-18f880dbccb5" (UID: "5669b14f-d850-4cfc-a7ae-18f880dbccb5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.993498 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5669b14f-d850-4cfc-a7ae-18f880dbccb5-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.993541 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8wm4\" (UniqueName: \"kubernetes.io/projected/5669b14f-d850-4cfc-a7ae-18f880dbccb5-kube-api-access-m8wm4\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:16 crc kubenswrapper[4746]: I1211 09:58:16.993558 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5669b14f-d850-4cfc-a7ae-18f880dbccb5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:17 crc kubenswrapper[4746]: I1211 09:58:17.543163 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-whqdv_5669b14f-d850-4cfc-a7ae-18f880dbccb5/extract-content/0.log" Dec 11 09:58:17 crc kubenswrapper[4746]: I1211 09:58:17.543729 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-whqdv" event={"ID":"5669b14f-d850-4cfc-a7ae-18f880dbccb5","Type":"ContainerDied","Data":"454977cf9a60c17fca4318cde0a5b65bb22c989fc70f656b386e6f5f88d357b5"} Dec 11 09:58:17 crc kubenswrapper[4746]: I1211 09:58:17.543777 4746 scope.go:117] "RemoveContainer" containerID="d1ea09cb45b85d1a3ebda15b1c13f817b0376d529a22edbd89ed6d49eb05af59" Dec 11 09:58:17 crc kubenswrapper[4746]: I1211 09:58:17.543813 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-whqdv" Dec 11 09:58:17 crc kubenswrapper[4746]: I1211 09:58:17.547542 4746 generic.go:334] "Generic (PLEG): container finished" podID="cc103c2c-47fd-44d7-819d-2a75b4a198de" containerID="a8c03f06c90cf8e015d0cf3f9c950e34ef2c8e9a8557973f8eeb123aba17a80d" exitCode=0 Dec 11 09:58:17 crc kubenswrapper[4746]: I1211 09:58:17.548608 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh7kg" event={"ID":"cc103c2c-47fd-44d7-819d-2a75b4a198de","Type":"ContainerDied","Data":"a8c03f06c90cf8e015d0cf3f9c950e34ef2c8e9a8557973f8eeb123aba17a80d"} Dec 11 09:58:17 crc kubenswrapper[4746]: I1211 09:58:17.576924 4746 scope.go:117] "RemoveContainer" containerID="18c578b4f4bf1ee1b34f858656297eeb82a22501422f2615499ad07d079de3c1" Dec 11 09:58:17 crc kubenswrapper[4746]: I1211 09:58:17.620216 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-whqdv"] Dec 11 09:58:17 crc kubenswrapper[4746]: I1211 09:58:17.777127 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ea31c3c-c4a5-47be-bddf-49b679f030d6" path="/var/lib/kubelet/pods/7ea31c3c-c4a5-47be-bddf-49b679f030d6/volumes" Dec 
11 09:58:17 crc kubenswrapper[4746]: I1211 09:58:17.781142 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-whqdv"] Dec 11 09:58:17 crc kubenswrapper[4746]: I1211 09:58:17.904413 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jdpcb" Dec 11 09:58:17 crc kubenswrapper[4746]: I1211 09:58:17.955650 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9qmb2"] Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.184830 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bgg6h_cccdab48-aeee-44f8-aebc-90129170ea8a/registry-server/0.log" Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.186091 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bgg6h" Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.381663 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh759\" (UniqueName: \"kubernetes.io/projected/cccdab48-aeee-44f8-aebc-90129170ea8a-kube-api-access-lh759\") pod \"cccdab48-aeee-44f8-aebc-90129170ea8a\" (UID: \"cccdab48-aeee-44f8-aebc-90129170ea8a\") " Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.381821 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cccdab48-aeee-44f8-aebc-90129170ea8a-utilities\") pod \"cccdab48-aeee-44f8-aebc-90129170ea8a\" (UID: \"cccdab48-aeee-44f8-aebc-90129170ea8a\") " Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.381895 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cccdab48-aeee-44f8-aebc-90129170ea8a-catalog-content\") pod \"cccdab48-aeee-44f8-aebc-90129170ea8a\" (UID: 
\"cccdab48-aeee-44f8-aebc-90129170ea8a\") " Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.383375 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cccdab48-aeee-44f8-aebc-90129170ea8a-utilities" (OuterVolumeSpecName: "utilities") pod "cccdab48-aeee-44f8-aebc-90129170ea8a" (UID: "cccdab48-aeee-44f8-aebc-90129170ea8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.387452 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cccdab48-aeee-44f8-aebc-90129170ea8a-kube-api-access-lh759" (OuterVolumeSpecName: "kube-api-access-lh759") pod "cccdab48-aeee-44f8-aebc-90129170ea8a" (UID: "cccdab48-aeee-44f8-aebc-90129170ea8a"). InnerVolumeSpecName "kube-api-access-lh759". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.483898 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh759\" (UniqueName: \"kubernetes.io/projected/cccdab48-aeee-44f8-aebc-90129170ea8a-kube-api-access-lh759\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.483948 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cccdab48-aeee-44f8-aebc-90129170ea8a-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.518190 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cccdab48-aeee-44f8-aebc-90129170ea8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cccdab48-aeee-44f8-aebc-90129170ea8a" (UID: "cccdab48-aeee-44f8-aebc-90129170ea8a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.560705 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-bgg6h_cccdab48-aeee-44f8-aebc-90129170ea8a/registry-server/0.log" Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.561593 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bgg6h" event={"ID":"cccdab48-aeee-44f8-aebc-90129170ea8a","Type":"ContainerDied","Data":"9ca43008fd1a6b7b40701f7f8f405c46301828c06c009e55f36a0210eb3bce98"} Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.561623 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bgg6h" Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.561635 4746 scope.go:117] "RemoveContainer" containerID="4b0bf9ef455186a9aa17f956618011b5a5837b35a1bae0e2178df84cebfe4b54" Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.586140 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cccdab48-aeee-44f8-aebc-90129170ea8a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.592900 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bgg6h"] Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.593064 4746 scope.go:117] "RemoveContainer" containerID="1375c6ce221f579ad2e661bb8efa52158186596f09fb5cf692a7a63d02699ca2" Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.618270 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bgg6h"] Dec 11 09:58:18 crc kubenswrapper[4746]: I1211 09:58:18.638483 4746 scope.go:117] "RemoveContainer" containerID="6cacebf26909dd9bee5ec6ccbfe8106bee5365fd590be6e8cb861abf5ee457ce" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 
09:58:19.418966 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kdhlr"] Dec 11 09:58:19 crc kubenswrapper[4746]: E1211 09:58:19.419176 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cccdab48-aeee-44f8-aebc-90129170ea8a" containerName="extract-content" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.419187 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cccdab48-aeee-44f8-aebc-90129170ea8a" containerName="extract-content" Dec 11 09:58:19 crc kubenswrapper[4746]: E1211 09:58:19.419201 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="611ec549-a1d2-4b9a-b67e-2216ec21327a" containerName="extract-content" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.419209 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="611ec549-a1d2-4b9a-b67e-2216ec21327a" containerName="extract-content" Dec 11 09:58:19 crc kubenswrapper[4746]: E1211 09:58:19.419219 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cccdab48-aeee-44f8-aebc-90129170ea8a" containerName="registry-server" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.419226 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cccdab48-aeee-44f8-aebc-90129170ea8a" containerName="registry-server" Dec 11 09:58:19 crc kubenswrapper[4746]: E1211 09:58:19.419233 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5669b14f-d850-4cfc-a7ae-18f880dbccb5" containerName="extract-content" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.419239 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5669b14f-d850-4cfc-a7ae-18f880dbccb5" containerName="extract-content" Dec 11 09:58:19 crc kubenswrapper[4746]: E1211 09:58:19.419248 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea31c3c-c4a5-47be-bddf-49b679f030d6" containerName="extract-utilities" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.419254 4746 
state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea31c3c-c4a5-47be-bddf-49b679f030d6" containerName="extract-utilities" Dec 11 09:58:19 crc kubenswrapper[4746]: E1211 09:58:19.419266 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cccdab48-aeee-44f8-aebc-90129170ea8a" containerName="extract-utilities" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.419272 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cccdab48-aeee-44f8-aebc-90129170ea8a" containerName="extract-utilities" Dec 11 09:58:19 crc kubenswrapper[4746]: E1211 09:58:19.419279 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5669b14f-d850-4cfc-a7ae-18f880dbccb5" containerName="extract-utilities" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.419285 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5669b14f-d850-4cfc-a7ae-18f880dbccb5" containerName="extract-utilities" Dec 11 09:58:19 crc kubenswrapper[4746]: E1211 09:58:19.419291 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ea31c3c-c4a5-47be-bddf-49b679f030d6" containerName="extract-content" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.419297 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ea31c3c-c4a5-47be-bddf-49b679f030d6" containerName="extract-content" Dec 11 09:58:19 crc kubenswrapper[4746]: E1211 09:58:19.419306 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="611ec549-a1d2-4b9a-b67e-2216ec21327a" containerName="extract-utilities" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.419311 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="611ec549-a1d2-4b9a-b67e-2216ec21327a" containerName="extract-utilities" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.419390 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="cccdab48-aeee-44f8-aebc-90129170ea8a" containerName="registry-server" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.419401 4746 
memory_manager.go:354] "RemoveStaleState removing state" podUID="611ec549-a1d2-4b9a-b67e-2216ec21327a" containerName="extract-content" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.419410 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="5669b14f-d850-4cfc-a7ae-18f880dbccb5" containerName="extract-content" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.419420 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ea31c3c-c4a5-47be-bddf-49b679f030d6" containerName="extract-content" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.420068 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kdhlr" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.425675 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.432751 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kdhlr"] Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.478929 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjsrk\" (UniqueName: \"kubernetes.io/projected/a5a6269e-2d2d-4034-b7af-b0cf87317c98-kube-api-access-kjsrk\") pod \"redhat-marketplace-kdhlr\" (UID: \"a5a6269e-2d2d-4034-b7af-b0cf87317c98\") " pod="openshift-marketplace/redhat-marketplace-kdhlr" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.478986 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5a6269e-2d2d-4034-b7af-b0cf87317c98-utilities\") pod \"redhat-marketplace-kdhlr\" (UID: \"a5a6269e-2d2d-4034-b7af-b0cf87317c98\") " pod="openshift-marketplace/redhat-marketplace-kdhlr" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.479012 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5a6269e-2d2d-4034-b7af-b0cf87317c98-catalog-content\") pod \"redhat-marketplace-kdhlr\" (UID: \"a5a6269e-2d2d-4034-b7af-b0cf87317c98\") " pod="openshift-marketplace/redhat-marketplace-kdhlr" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.580324 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjsrk\" (UniqueName: \"kubernetes.io/projected/a5a6269e-2d2d-4034-b7af-b0cf87317c98-kube-api-access-kjsrk\") pod \"redhat-marketplace-kdhlr\" (UID: \"a5a6269e-2d2d-4034-b7af-b0cf87317c98\") " pod="openshift-marketplace/redhat-marketplace-kdhlr" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.580430 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5a6269e-2d2d-4034-b7af-b0cf87317c98-utilities\") pod \"redhat-marketplace-kdhlr\" (UID: \"a5a6269e-2d2d-4034-b7af-b0cf87317c98\") " pod="openshift-marketplace/redhat-marketplace-kdhlr" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.580471 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5a6269e-2d2d-4034-b7af-b0cf87317c98-catalog-content\") pod \"redhat-marketplace-kdhlr\" (UID: \"a5a6269e-2d2d-4034-b7af-b0cf87317c98\") " pod="openshift-marketplace/redhat-marketplace-kdhlr" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.581420 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5a6269e-2d2d-4034-b7af-b0cf87317c98-catalog-content\") pod \"redhat-marketplace-kdhlr\" (UID: \"a5a6269e-2d2d-4034-b7af-b0cf87317c98\") " pod="openshift-marketplace/redhat-marketplace-kdhlr" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.581459 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5a6269e-2d2d-4034-b7af-b0cf87317c98-utilities\") pod \"redhat-marketplace-kdhlr\" (UID: \"a5a6269e-2d2d-4034-b7af-b0cf87317c98\") " pod="openshift-marketplace/redhat-marketplace-kdhlr" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.608341 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjsrk\" (UniqueName: \"kubernetes.io/projected/a5a6269e-2d2d-4034-b7af-b0cf87317c98-kube-api-access-kjsrk\") pod \"redhat-marketplace-kdhlr\" (UID: \"a5a6269e-2d2d-4034-b7af-b0cf87317c98\") " pod="openshift-marketplace/redhat-marketplace-kdhlr" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.636800 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5669b14f-d850-4cfc-a7ae-18f880dbccb5" path="/var/lib/kubelet/pods/5669b14f-d850-4cfc-a7ae-18f880dbccb5/volumes" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.637840 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cccdab48-aeee-44f8-aebc-90129170ea8a" path="/var/lib/kubelet/pods/cccdab48-aeee-44f8-aebc-90129170ea8a/volumes" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.752956 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kdhlr" Dec 11 09:58:19 crc kubenswrapper[4746]: I1211 09:58:19.951820 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kdhlr"] Dec 11 09:58:20 crc kubenswrapper[4746]: I1211 09:58:20.586574 4746 generic.go:334] "Generic (PLEG): container finished" podID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" containerID="0f90e67c6ac28b4b55963861c4d025d487fba221eb271d048f4a39a9abd274ce" exitCode=0 Dec 11 09:58:20 crc kubenswrapper[4746]: I1211 09:58:20.586647 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kdhlr" event={"ID":"a5a6269e-2d2d-4034-b7af-b0cf87317c98","Type":"ContainerDied","Data":"0f90e67c6ac28b4b55963861c4d025d487fba221eb271d048f4a39a9abd274ce"} Dec 11 09:58:20 crc kubenswrapper[4746]: I1211 09:58:20.586687 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kdhlr" event={"ID":"a5a6269e-2d2d-4034-b7af-b0cf87317c98","Type":"ContainerStarted","Data":"e6fce1e56dbeee4142223b2c6dacdd7acadc1918f1dec5a9dbc46e3cff6d17c5"} Dec 11 09:58:20 crc kubenswrapper[4746]: I1211 09:58:20.965592 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/492f0d06-a8d7-4f81-a86a-5f1caea059cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "492f0d06-a8d7-4f81-a86a-5f1caea059cd" (UID: "492f0d06-a8d7-4f81-a86a-5f1caea059cd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:20 crc kubenswrapper[4746]: I1211 09:58:20.999711 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/492f0d06-a8d7-4f81-a86a-5f1caea059cd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.140991 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vq5fs"] Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.158770 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vq5fs"] Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.220370 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k24k2"] Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.221629 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k24k2" Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.223770 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.233686 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k24k2"] Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.404235 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj87f\" (UniqueName: \"kubernetes.io/projected/115e0a6a-c3eb-4931-af44-a645f0904e3e-kube-api-access-nj87f\") pod \"redhat-operators-k24k2\" (UID: \"115e0a6a-c3eb-4931-af44-a645f0904e3e\") " pod="openshift-marketplace/redhat-operators-k24k2" Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.404281 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/115e0a6a-c3eb-4931-af44-a645f0904e3e-utilities\") pod \"redhat-operators-k24k2\" (UID: \"115e0a6a-c3eb-4931-af44-a645f0904e3e\") " pod="openshift-marketplace/redhat-operators-k24k2" Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.404361 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115e0a6a-c3eb-4931-af44-a645f0904e3e-catalog-content\") pod \"redhat-operators-k24k2\" (UID: \"115e0a6a-c3eb-4931-af44-a645f0904e3e\") " pod="openshift-marketplace/redhat-operators-k24k2" Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.505546 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj87f\" (UniqueName: \"kubernetes.io/projected/115e0a6a-c3eb-4931-af44-a645f0904e3e-kube-api-access-nj87f\") pod \"redhat-operators-k24k2\" (UID: \"115e0a6a-c3eb-4931-af44-a645f0904e3e\") " pod="openshift-marketplace/redhat-operators-k24k2" Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.505613 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/115e0a6a-c3eb-4931-af44-a645f0904e3e-utilities\") pod \"redhat-operators-k24k2\" (UID: \"115e0a6a-c3eb-4931-af44-a645f0904e3e\") " pod="openshift-marketplace/redhat-operators-k24k2" Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.505657 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115e0a6a-c3eb-4931-af44-a645f0904e3e-catalog-content\") pod \"redhat-operators-k24k2\" (UID: \"115e0a6a-c3eb-4931-af44-a645f0904e3e\") " pod="openshift-marketplace/redhat-operators-k24k2" Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.506121 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/115e0a6a-c3eb-4931-af44-a645f0904e3e-utilities\") pod \"redhat-operators-k24k2\" (UID: \"115e0a6a-c3eb-4931-af44-a645f0904e3e\") " pod="openshift-marketplace/redhat-operators-k24k2" Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.506130 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/115e0a6a-c3eb-4931-af44-a645f0904e3e-catalog-content\") pod \"redhat-operators-k24k2\" (UID: \"115e0a6a-c3eb-4931-af44-a645f0904e3e\") " pod="openshift-marketplace/redhat-operators-k24k2" Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.529937 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj87f\" (UniqueName: \"kubernetes.io/projected/115e0a6a-c3eb-4931-af44-a645f0904e3e-kube-api-access-nj87f\") pod \"redhat-operators-k24k2\" (UID: \"115e0a6a-c3eb-4931-af44-a645f0904e3e\") " pod="openshift-marketplace/redhat-operators-k24k2" Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.552658 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k24k2" Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.636920 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="492f0d06-a8d7-4f81-a86a-5f1caea059cd" path="/var/lib/kubelet/pods/492f0d06-a8d7-4f81-a86a-5f1caea059cd/volumes" Dec 11 09:58:21 crc kubenswrapper[4746]: I1211 09:58:21.962886 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k24k2"] Dec 11 09:58:21 crc kubenswrapper[4746]: W1211 09:58:21.967383 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod115e0a6a_c3eb_4931_af44_a645f0904e3e.slice/crio-18ee3d2b983908c5f565f1d04715c8b6bd879534e903e2eab185f0a793215599 WatchSource:0}: Error finding container 18ee3d2b983908c5f565f1d04715c8b6bd879534e903e2eab185f0a793215599: Status 404 returned error can't find the container with id 18ee3d2b983908c5f565f1d04715c8b6bd879534e903e2eab185f0a793215599 Dec 11 09:58:22 crc kubenswrapper[4746]: I1211 09:58:22.628994 4746 generic.go:334] "Generic (PLEG): container finished" podID="115e0a6a-c3eb-4931-af44-a645f0904e3e" containerID="c974ecb79b265aa75cf54f7b575ac3c9f1930885f0b95c20faa635970964f2f2" exitCode=0 Dec 11 09:58:22 crc kubenswrapper[4746]: I1211 09:58:22.630445 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k24k2" event={"ID":"115e0a6a-c3eb-4931-af44-a645f0904e3e","Type":"ContainerDied","Data":"c974ecb79b265aa75cf54f7b575ac3c9f1930885f0b95c20faa635970964f2f2"} Dec 11 09:58:22 crc kubenswrapper[4746]: I1211 09:58:22.630473 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k24k2" event={"ID":"115e0a6a-c3eb-4931-af44-a645f0904e3e","Type":"ContainerStarted","Data":"18ee3d2b983908c5f565f1d04715c8b6bd879534e903e2eab185f0a793215599"} Dec 11 09:58:23 crc kubenswrapper[4746]: I1211 09:58:23.818615 4746 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pwjvd"] Dec 11 09:58:23 crc kubenswrapper[4746]: I1211 09:58:23.822823 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 09:58:23 crc kubenswrapper[4746]: I1211 09:58:23.823873 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pwjvd"] Dec 11 09:58:23 crc kubenswrapper[4746]: I1211 09:58:23.825419 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 09:58:23 crc kubenswrapper[4746]: I1211 09:58:23.852344 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d46174-be57-41e7-9363-896cc0a860c6-utilities\") pod \"certified-operators-pwjvd\" (UID: \"e2d46174-be57-41e7-9363-896cc0a860c6\") " pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 09:58:23 crc kubenswrapper[4746]: I1211 09:58:23.852399 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st5zv\" (UniqueName: \"kubernetes.io/projected/e2d46174-be57-41e7-9363-896cc0a860c6-kube-api-access-st5zv\") pod \"certified-operators-pwjvd\" (UID: \"e2d46174-be57-41e7-9363-896cc0a860c6\") " pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 09:58:23 crc kubenswrapper[4746]: I1211 09:58:23.852460 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d46174-be57-41e7-9363-896cc0a860c6-catalog-content\") pod \"certified-operators-pwjvd\" (UID: \"e2d46174-be57-41e7-9363-896cc0a860c6\") " pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 09:58:23 crc kubenswrapper[4746]: I1211 09:58:23.953361 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d46174-be57-41e7-9363-896cc0a860c6-catalog-content\") pod \"certified-operators-pwjvd\" (UID: \"e2d46174-be57-41e7-9363-896cc0a860c6\") " pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 09:58:23 crc kubenswrapper[4746]: I1211 09:58:23.953493 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d46174-be57-41e7-9363-896cc0a860c6-utilities\") pod \"certified-operators-pwjvd\" (UID: \"e2d46174-be57-41e7-9363-896cc0a860c6\") " pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 09:58:23 crc kubenswrapper[4746]: I1211 09:58:23.953551 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st5zv\" (UniqueName: \"kubernetes.io/projected/e2d46174-be57-41e7-9363-896cc0a860c6-kube-api-access-st5zv\") pod \"certified-operators-pwjvd\" (UID: \"e2d46174-be57-41e7-9363-896cc0a860c6\") " pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 09:58:23 crc kubenswrapper[4746]: I1211 09:58:23.954031 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d46174-be57-41e7-9363-896cc0a860c6-utilities\") pod \"certified-operators-pwjvd\" (UID: \"e2d46174-be57-41e7-9363-896cc0a860c6\") " pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 09:58:23 crc kubenswrapper[4746]: I1211 09:58:23.954069 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d46174-be57-41e7-9363-896cc0a860c6-catalog-content\") pod \"certified-operators-pwjvd\" (UID: \"e2d46174-be57-41e7-9363-896cc0a860c6\") " pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 09:58:23 crc kubenswrapper[4746]: I1211 09:58:23.974689 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-st5zv\" (UniqueName: \"kubernetes.io/projected/e2d46174-be57-41e7-9363-896cc0a860c6-kube-api-access-st5zv\") pod \"certified-operators-pwjvd\" (UID: \"e2d46174-be57-41e7-9363-896cc0a860c6\") " pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 09:58:24 crc kubenswrapper[4746]: I1211 09:58:24.148180 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 09:58:28 crc kubenswrapper[4746]: I1211 09:58:28.933690 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pwjvd"] Dec 11 09:58:28 crc kubenswrapper[4746]: W1211 09:58:28.954032 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode2d46174_be57_41e7_9363_896cc0a860c6.slice/crio-4a3c3c1ddcad130fde5b2d285794243bb79c7e62e37f15035b32815c2bf7da16 WatchSource:0}: Error finding container 4a3c3c1ddcad130fde5b2d285794243bb79c7e62e37f15035b32815c2bf7da16: Status 404 returned error can't find the container with id 4a3c3c1ddcad130fde5b2d285794243bb79c7e62e37f15035b32815c2bf7da16 Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.031240 4746 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.032229 4746 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.032810 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2" gracePeriod=15 Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.033067 4746 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.033376 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8" gracePeriod=15 Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.033282 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c" gracePeriod=15 Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.033478 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca" gracePeriod=15 Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.034504 4746 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 09:58:29 crc kubenswrapper[4746]: E1211 09:58:29.034619 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.034629 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 11 09:58:29 crc kubenswrapper[4746]: E1211 09:58:29.034637 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.034643 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 11 09:58:29 crc kubenswrapper[4746]: E1211 09:58:29.034650 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.034656 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 09:58:29 crc kubenswrapper[4746]: E1211 09:58:29.034665 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.034670 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 09:58:29 crc kubenswrapper[4746]: E1211 09:58:29.034682 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.034688 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 11 09:58:29 crc kubenswrapper[4746]: E1211 09:58:29.034699 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.034705 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 09:58:29 crc kubenswrapper[4746]: E1211 09:58:29.034713 4746 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.034727 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.034828 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.034841 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.034850 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.034860 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.034868 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.034877 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.033356 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631" gracePeriod=15 
Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.143965 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.144131 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.144187 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.144234 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.144571 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.144653 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.144721 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.144862 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.245680 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.245764 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.245807 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.245833 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.245856 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.245887 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.245950 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.245978 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.246178 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.246224 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.246254 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.246294 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.246323 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.246349 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.246377 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.246439 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.707203 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwjvd" event={"ID":"e2d46174-be57-41e7-9363-896cc0a860c6","Type":"ContainerStarted","Data":"4a3c3c1ddcad130fde5b2d285794243bb79c7e62e37f15035b32815c2bf7da16"} Dec 11 09:58:29 crc kubenswrapper[4746]: E1211 09:58:29.988789 4746 kubelet.go:1929] "Failed creating a 
mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.214:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:29 crc kubenswrapper[4746]: I1211 09:58:29.989442 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:33 crc kubenswrapper[4746]: E1211 09:58:30.661276 4746 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:33 crc kubenswrapper[4746]: E1211 09:58:30.662313 4746 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:33 crc kubenswrapper[4746]: E1211 09:58:30.662971 4746 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:33 crc kubenswrapper[4746]: E1211 09:58:30.663214 4746 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:33 crc kubenswrapper[4746]: E1211 09:58:30.663400 4746 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:30.663423 4746 
controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 11 09:58:33 crc kubenswrapper[4746]: E1211 09:58:30.663601 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="200ms" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:30.717927 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:30.720204 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:30.722462 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8" exitCode=2 Dec 11 09:58:33 crc kubenswrapper[4746]: E1211 09:58:30.865141 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="400ms" Dec 11 09:58:33 crc kubenswrapper[4746]: E1211 09:58:31.266833 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="800ms" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:31.732211 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:31.734070 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:31.735919 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca" exitCode=0 Dec 11 09:58:33 crc kubenswrapper[4746]: E1211 09:58:31.898003 4746 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.214:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-pwjvd.188020cb66603679 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-pwjvd,UID:e2d46174-be57-41e7-9363-896cc0a860c6,APIVersion:v1,ResourceVersion:29494,FieldPath:spec.initContainers{extract-utilities},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 09:58:31.897265785 +0000 UTC m=+284.757129098,LastTimestamp:2025-12-11 09:58:31.897265785 +0000 UTC m=+284.757129098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 09:58:33 crc kubenswrapper[4746]: E1211 09:58:32.067933 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="1.6s" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:32.512453 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" containerName="oauth-openshift" containerID="cri-o://cf603fe439c3bb316a24cb00d8cd008a30d5d63f3751fea73968b6b8db446c18" gracePeriod=15 Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:32.746460 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:32.748808 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:32.750144 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c" exitCode=0 Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:32.750175 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631" exitCode=0 Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:32.750188 4746 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2" exitCode=0 Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:32.750271 4746 scope.go:117] "RemoveContainer" containerID="c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627" Dec 11 
09:58:33 crc kubenswrapper[4746]: I1211 09:58:32.755887 4746 generic.go:334] "Generic (PLEG): container finished" podID="cc103c2c-47fd-44d7-819d-2a75b4a198de" containerID="0f6dc65f00508b697462046607523d406fdcbb03767545fb0d21cce3b0d9f6c7" exitCode=0 Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:32.755971 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh7kg" event={"ID":"cc103c2c-47fd-44d7-819d-2a75b4a198de","Type":"ContainerDied","Data":"0f6dc65f00508b697462046607523d406fdcbb03767545fb0d21cce3b0d9f6c7"} Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:32.757342 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:32.759357 4746 generic.go:334] "Generic (PLEG): container finished" podID="1222305b-d227-4d6a-a76b-90a5ade7c176" containerID="5a715be1478cee295d8391bb597f56a30cf0e77aca9c99928a9cf26299498ca5" exitCode=0 Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:32.759387 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1222305b-d227-4d6a-a76b-90a5ade7c176","Type":"ContainerDied","Data":"5a715be1478cee295d8391bb597f56a30cf0e77aca9c99928a9cf26299498ca5"} Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:32.759871 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:32.760426 
4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:33 crc kubenswrapper[4746]: E1211 09:58:33.124110 4746 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.214:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-pwjvd.188020cb66603679 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-pwjvd,UID:e2d46174-be57-41e7-9363-896cc0a860c6,APIVersion:v1,ResourceVersion:29494,FieldPath:spec.initContainers{extract-utilities},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 09:58:31.897265785 +0000 UTC m=+284.757129098,LastTimestamp:2025-12-11 09:58:31.897265785 +0000 UTC m=+284.757129098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.261962 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.263328 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.264414 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.265157 4746 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.265635 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.318617 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.318783 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.318936 4746 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.319119 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.319119 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.319231 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.319457 4746 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.319492 4746 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.319515 4746 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.639227 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 11 09:58:33 crc kubenswrapper[4746]: E1211 09:58:33.669426 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="3.2s" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.772393 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.773503 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.774371 4746 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.775473 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.775795 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.777255 4746 generic.go:334] "Generic (PLEG): container finished" podID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" containerID="cf603fe439c3bb316a24cb00d8cd008a30d5d63f3751fea73968b6b8db446c18" exitCode=0 Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.777292 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" event={"ID":"6cae8380-85f7-4534-9bfc-46c5a3d6711f","Type":"ContainerDied","Data":"cf603fe439c3bb316a24cb00d8cd008a30d5d63f3751fea73968b6b8db446c18"} Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.777879 4746 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.778257 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:33 crc kubenswrapper[4746]: I1211 09:58:33.778731 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:34 crc kubenswrapper[4746]: I1211 09:58:34.992409 4746 scope.go:117] "RemoveContainer" containerID="a1446429e062412badab876e2b00e1ea2b797cc47457bccf741bb163a5489a7c" Dec 11 09:58:35 crc kubenswrapper[4746]: W1211 09:58:35.048584 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-929e12eda6831de2123a177d4ab01ca9ad5a328f1ce620c7806f8b206fe6eed7 WatchSource:0}: Error finding container 929e12eda6831de2123a177d4ab01ca9ad5a328f1ce620c7806f8b206fe6eed7: Status 404 returned error can't find the container with id 929e12eda6831de2123a177d4ab01ca9ad5a328f1ce620c7806f8b206fe6eed7 Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.087425 4746 scope.go:117] "RemoveContainer" containerID="c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627" Dec 11 09:58:35 crc kubenswrapper[4746]: E1211 09:58:35.088254 4746 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\": container with ID starting with c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627 not found: ID does not exist" containerID="c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.088317 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627"} err="failed to get container status \"c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\": rpc error: code = NotFound desc = could not find container \"c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627\": container with ID starting with c4efdb3c546e7aa1c2645f1319ec54d9eb6d750c5a83e5a466b6de7f45731627 not found: ID does not exist" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.088365 4746 scope.go:117] "RemoveContainer" containerID="3f8e30191c719d58630915df4106ef86df474159593c8932053805f69dcce4ca" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.096935 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.097842 4746 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.098188 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.098554 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.110480 4746 scope.go:117] "RemoveContainer" containerID="433b796d020ada8f907237e63bf27fbc4c57a30ec0a5291d1bf1b40c3f540631" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.132882 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.134393 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.134710 4746 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.134887 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.135071 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.141662 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1222305b-d227-4d6a-a76b-90a5ade7c176-var-lock\") pod \"1222305b-d227-4d6a-a76b-90a5ade7c176\" (UID: 
\"1222305b-d227-4d6a-a76b-90a5ade7c176\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.141709 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1222305b-d227-4d6a-a76b-90a5ade7c176-kube-api-access\") pod \"1222305b-d227-4d6a-a76b-90a5ade7c176\" (UID: \"1222305b-d227-4d6a-a76b-90a5ade7c176\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.141753 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1222305b-d227-4d6a-a76b-90a5ade7c176-var-lock" (OuterVolumeSpecName: "var-lock") pod "1222305b-d227-4d6a-a76b-90a5ade7c176" (UID: "1222305b-d227-4d6a-a76b-90a5ade7c176"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.142524 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1222305b-d227-4d6a-a76b-90a5ade7c176-kubelet-dir\") pod \"1222305b-d227-4d6a-a76b-90a5ade7c176\" (UID: \"1222305b-d227-4d6a-a76b-90a5ade7c176\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.142589 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1222305b-d227-4d6a-a76b-90a5ade7c176-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1222305b-d227-4d6a-a76b-90a5ade7c176" (UID: "1222305b-d227-4d6a-a76b-90a5ade7c176"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.142929 4746 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/1222305b-d227-4d6a-a76b-90a5ade7c176-var-lock\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.142986 4746 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1222305b-d227-4d6a-a76b-90a5ade7c176-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.150613 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1222305b-d227-4d6a-a76b-90a5ade7c176-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1222305b-d227-4d6a-a76b-90a5ade7c176" (UID: "1222305b-d227-4d6a-a76b-90a5ade7c176"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.153871 4746 scope.go:117] "RemoveContainer" containerID="0595fd35019dd78737f0e2287eae9d70fd00ca7d9470f4625344e0811d7293e8" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.243235 4746 scope.go:117] "RemoveContainer" containerID="95dd2894c17d2dd2cc7c0ca316394bee095d48f3ccc727860606b0932cc2d1d2" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.243483 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-ocp-branding-template\") pod \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.243711 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-login\") pod \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.243755 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-serving-cert\") pod \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.243781 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-provider-selection\") pod \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.243805 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-service-ca\") pod \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.243860 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-idp-0-file-data\") pod \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.243887 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-cliconfig\") pod \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.243910 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6cae8380-85f7-4534-9bfc-46c5a3d6711f-audit-dir\") pod \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.243936 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-error\") pod \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.243978 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-session\") pod \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.244009 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-router-certs\") pod \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.244075 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqql8\" (UniqueName: \"kubernetes.io/projected/6cae8380-85f7-4534-9bfc-46c5a3d6711f-kube-api-access-hqql8\") pod \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\" 
(UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.244098 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-audit-policies\") pod \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.244126 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-trusted-ca-bundle\") pod \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\" (UID: \"6cae8380-85f7-4534-9bfc-46c5a3d6711f\") " Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.244468 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1222305b-d227-4d6a-a76b-90a5ade7c176-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.245008 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6cae8380-85f7-4534-9bfc-46c5a3d6711f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "6cae8380-85f7-4534-9bfc-46c5a3d6711f" (UID: "6cae8380-85f7-4534-9bfc-46c5a3d6711f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.245678 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6cae8380-85f7-4534-9bfc-46c5a3d6711f" (UID: "6cae8380-85f7-4534-9bfc-46c5a3d6711f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.246689 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6cae8380-85f7-4534-9bfc-46c5a3d6711f" (UID: "6cae8380-85f7-4534-9bfc-46c5a3d6711f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.247734 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6cae8380-85f7-4534-9bfc-46c5a3d6711f" (UID: "6cae8380-85f7-4534-9bfc-46c5a3d6711f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.249584 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6cae8380-85f7-4534-9bfc-46c5a3d6711f" (UID: "6cae8380-85f7-4534-9bfc-46c5a3d6711f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.253572 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6cae8380-85f7-4534-9bfc-46c5a3d6711f" (UID: "6cae8380-85f7-4534-9bfc-46c5a3d6711f"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.254127 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6cae8380-85f7-4534-9bfc-46c5a3d6711f" (UID: "6cae8380-85f7-4534-9bfc-46c5a3d6711f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.254820 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cae8380-85f7-4534-9bfc-46c5a3d6711f-kube-api-access-hqql8" (OuterVolumeSpecName: "kube-api-access-hqql8") pod "6cae8380-85f7-4534-9bfc-46c5a3d6711f" (UID: "6cae8380-85f7-4534-9bfc-46c5a3d6711f"). InnerVolumeSpecName "kube-api-access-hqql8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.254896 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6cae8380-85f7-4534-9bfc-46c5a3d6711f" (UID: "6cae8380-85f7-4534-9bfc-46c5a3d6711f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.255602 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6cae8380-85f7-4534-9bfc-46c5a3d6711f" (UID: "6cae8380-85f7-4534-9bfc-46c5a3d6711f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.264986 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6cae8380-85f7-4534-9bfc-46c5a3d6711f" (UID: "6cae8380-85f7-4534-9bfc-46c5a3d6711f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.266276 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6cae8380-85f7-4534-9bfc-46c5a3d6711f" (UID: "6cae8380-85f7-4534-9bfc-46c5a3d6711f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.267153 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6cae8380-85f7-4534-9bfc-46c5a3d6711f" (UID: "6cae8380-85f7-4534-9bfc-46c5a3d6711f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.267833 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6cae8380-85f7-4534-9bfc-46c5a3d6711f" (UID: "6cae8380-85f7-4534-9bfc-46c5a3d6711f"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.274638 4746 scope.go:117] "RemoveContainer" containerID="767b264bee6944c3f289dfafc6761fd477174f5a72a3aeb4b6ae14de06962f2a" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.345841 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.345879 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.345889 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.345898 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.345946 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.345955 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.345963 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.345974 4746 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6cae8380-85f7-4534-9bfc-46c5a3d6711f-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.345984 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.345993 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.346002 4746 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.346013 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqql8\" (UniqueName: \"kubernetes.io/projected/6cae8380-85f7-4534-9bfc-46c5a3d6711f-kube-api-access-hqql8\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.346023 4746 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.346031 4746 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6cae8380-85f7-4534-9bfc-46c5a3d6711f-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.789648 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh7kg" event={"ID":"cc103c2c-47fd-44d7-819d-2a75b4a198de","Type":"ContainerStarted","Data":"f4da99c181a0312257e2b61edede0f671b38269e8745939f4dc6077c18214952"} Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.790429 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.790645 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.790955 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc 
kubenswrapper[4746]: I1211 09:58:35.791134 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c8feab726c9f0c01017a2d31a24bf145c32f297dce47fe22c7c9e74f8fb20b81"} Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.791175 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"929e12eda6831de2123a177d4ab01ca9ad5a328f1ce620c7806f8b206fe6eed7"} Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.791596 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: E1211 09:58:35.791757 4746 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.214:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.791895 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.792248 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.793786 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"1222305b-d227-4d6a-a76b-90a5ade7c176","Type":"ContainerDied","Data":"d04d4367ce506d030a49e9b7d054dc23041b4956e17d134bd27f316f9fc21af4"} Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.793895 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d04d4367ce506d030a49e9b7d054dc23041b4956e17d134bd27f316f9fc21af4" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.793790 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.795158 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" event={"ID":"6cae8380-85f7-4534-9bfc-46c5a3d6711f","Type":"ContainerDied","Data":"e96e6ac187d2015be30d93701f63f32eec5308a45ef8d4c4e50ebfa015299656"} Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.795216 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.795304 4746 scope.go:117] "RemoveContainer" containerID="cf603fe439c3bb316a24cb00d8cd008a30d5d63f3751fea73968b6b8db446c18" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.796029 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.796320 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.796631 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.797483 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.797807 4746 status_manager.go:851] "Failed to get status for pod" 
podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.798086 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.798639 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.798940 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.799175 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.807351 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-k24k2" event={"ID":"115e0a6a-c3eb-4931-af44-a645f0904e3e","Type":"ContainerStarted","Data":"7a382e06b83847f191450b5d356edbc1e835faf092696691bafc286d9647a3bc"} Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.808223 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.815247 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.815929 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.816653 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.817198 4746 generic.go:334] "Generic (PLEG): container finished" podID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" 
containerID="0d81c599cdafc87ac5114e1838f7a3eb829e684ed3063305217e02566da80d12" exitCode=0 Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.817359 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kdhlr" event={"ID":"a5a6269e-2d2d-4034-b7af-b0cf87317c98","Type":"ContainerDied","Data":"0d81c599cdafc87ac5114e1838f7a3eb829e684ed3063305217e02566da80d12"} Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.818291 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.818513 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.818773 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.819262 4746 status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc 
kubenswrapper[4746]: I1211 09:58:35.819812 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.820323 4746 generic.go:334] "Generic (PLEG): container finished" podID="e2d46174-be57-41e7-9363-896cc0a860c6" containerID="72e141d9e68cbabd825a7615d334b4e9d578851585d860c3103683ff3df8b75b" exitCode=0 Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.820398 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwjvd" event={"ID":"e2d46174-be57-41e7-9363-896cc0a860c6","Type":"ContainerDied","Data":"72e141d9e68cbabd825a7615d334b4e9d578851585d860c3103683ff3df8b75b"} Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.821668 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.822112 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.822516 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.823367 4746 status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.823822 4746 status_manager.go:851] "Failed to get status for pod" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:35 crc kubenswrapper[4746]: I1211 09:58:35.824247 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:36 crc kubenswrapper[4746]: I1211 09:58:36.831364 4746 generic.go:334] "Generic (PLEG): container finished" podID="115e0a6a-c3eb-4931-af44-a645f0904e3e" containerID="7a382e06b83847f191450b5d356edbc1e835faf092696691bafc286d9647a3bc" exitCode=0 Dec 11 09:58:36 crc kubenswrapper[4746]: I1211 09:58:36.831454 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k24k2" event={"ID":"115e0a6a-c3eb-4931-af44-a645f0904e3e","Type":"ContainerDied","Data":"7a382e06b83847f191450b5d356edbc1e835faf092696691bafc286d9647a3bc"} Dec 11 09:58:36 crc kubenswrapper[4746]: I1211 
09:58:36.832839 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:36 crc kubenswrapper[4746]: I1211 09:58:36.833248 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:36 crc kubenswrapper[4746]: I1211 09:58:36.833496 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:36 crc kubenswrapper[4746]: E1211 09:58:36.833530 4746 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.214:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 09:58:36 crc kubenswrapper[4746]: I1211 09:58:36.833675 4746 status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:36 crc kubenswrapper[4746]: I1211 09:58:36.833894 4746 status_manager.go:851] "Failed to get status for pod" 
podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:36 crc kubenswrapper[4746]: I1211 09:58:36.834118 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:36 crc kubenswrapper[4746]: E1211 09:58:36.870989 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="6.4s" Dec 11 09:58:37 crc kubenswrapper[4746]: I1211 09:58:37.634483 4746 status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:37 crc kubenswrapper[4746]: I1211 09:58:37.634950 4746 status_manager.go:851] "Failed to get status for pod" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:37 crc kubenswrapper[4746]: I1211 09:58:37.635338 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" 
pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:37 crc kubenswrapper[4746]: I1211 09:58:37.635970 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:37 crc kubenswrapper[4746]: I1211 09:58:37.636741 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:37 crc kubenswrapper[4746]: I1211 09:58:37.637424 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:39 crc kubenswrapper[4746]: I1211 09:58:39.857370 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k24k2" event={"ID":"115e0a6a-c3eb-4931-af44-a645f0904e3e","Type":"ContainerStarted","Data":"0047bbd0f165f700f39e8c968901b4b98c459716fba85d871dd00772089a34a6"} Dec 11 09:58:39 crc kubenswrapper[4746]: I1211 09:58:39.859100 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:39 crc kubenswrapper[4746]: I1211 09:58:39.859289 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:39 crc kubenswrapper[4746]: I1211 09:58:39.859475 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:39 crc kubenswrapper[4746]: I1211 09:58:39.859658 4746 status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:39 crc kubenswrapper[4746]: I1211 09:58:39.859980 4746 status_manager.go:851] "Failed to get status for pod" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:39 crc kubenswrapper[4746]: I1211 09:58:39.860423 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:39 crc kubenswrapper[4746]: I1211 09:58:39.861457 4746 generic.go:334] "Generic (PLEG): container finished" podID="e2d46174-be57-41e7-9363-896cc0a860c6" containerID="cdda24f4ac99292532b48682f5dff5215972edb6a17d16233b224c9e0225b625" exitCode=0 Dec 11 09:58:39 crc kubenswrapper[4746]: I1211 09:58:39.861500 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwjvd" event={"ID":"e2d46174-be57-41e7-9363-896cc0a860c6","Type":"ContainerDied","Data":"cdda24f4ac99292532b48682f5dff5215972edb6a17d16233b224c9e0225b625"} Dec 11 09:58:39 crc kubenswrapper[4746]: I1211 09:58:39.862607 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:39 crc kubenswrapper[4746]: I1211 09:58:39.864555 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:39 crc kubenswrapper[4746]: I1211 09:58:39.866389 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:39 crc kubenswrapper[4746]: I1211 09:58:39.867611 4746 
status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:39 crc kubenswrapper[4746]: I1211 09:58:39.868092 4746 status_manager.go:851] "Failed to get status for pod" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:39 crc kubenswrapper[4746]: I1211 09:58:39.868581 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:41 crc kubenswrapper[4746]: I1211 09:58:41.553226 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k24k2" Dec 11 09:58:41 crc kubenswrapper[4746]: I1211 09:58:41.553834 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k24k2" Dec 11 09:58:41 crc kubenswrapper[4746]: E1211 09:58:41.841613 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:58:41Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:58:41Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:58:41Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:58:41Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:41 crc kubenswrapper[4746]: E1211 09:58:41.842000 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:41 crc kubenswrapper[4746]: E1211 09:58:41.842570 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:41 crc kubenswrapper[4746]: E1211 09:58:41.843186 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 
09:58:41 crc kubenswrapper[4746]: E1211 09:58:41.843645 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:41 crc kubenswrapper[4746]: E1211 09:58:41.843671 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 09:58:41 crc kubenswrapper[4746]: I1211 09:58:41.886560 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwjvd" event={"ID":"e2d46174-be57-41e7-9363-896cc0a860c6","Type":"ContainerStarted","Data":"57615ae741ee918ab363e11afeb2fdbe055a09950702b1201dff8d781e461ecf"} Dec 11 09:58:41 crc kubenswrapper[4746]: I1211 09:58:41.887493 4746 status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:41 crc kubenswrapper[4746]: I1211 09:58:41.888115 4746 status_manager.go:851] "Failed to get status for pod" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:41 crc kubenswrapper[4746]: I1211 09:58:41.888490 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" 
Dec 11 09:58:41 crc kubenswrapper[4746]: I1211 09:58:41.888805 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:41 crc kubenswrapper[4746]: I1211 09:58:41.889251 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:41 crc kubenswrapper[4746]: I1211 09:58:41.889510 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:42 crc kubenswrapper[4746]: I1211 09:58:42.996837 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" podUID="d9d880ef-9ac5-4686-bf49-77406ca35135" containerName="registry" containerID="cri-o://da766278b2272b5480fe84d156e96d5caff9d1f205a50852d11b90d860597cf3" gracePeriod=30 Dec 11 09:58:43 crc kubenswrapper[4746]: E1211 09:58:43.125959 4746 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.214:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-pwjvd.188020cb66603679 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-pwjvd,UID:e2d46174-be57-41e7-9363-896cc0a860c6,APIVersion:v1,ResourceVersion:29494,FieldPath:spec.initContainers{extract-utilities},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 09:58:31.897265785 +0000 UTC m=+284.757129098,LastTimestamp:2025-12-11 09:58:31.897265785 +0000 UTC m=+284.757129098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 09:58:43 crc kubenswrapper[4746]: E1211 09:58:43.273162 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" interval="7s" Dec 11 09:58:43 crc kubenswrapper[4746]: I1211 09:58:43.304225 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k24k2" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" containerName="registry-server" probeResult="failure" output=< Dec 11 09:58:43 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s Dec 11 09:58:43 crc kubenswrapper[4746]: > Dec 11 09:58:43 crc kubenswrapper[4746]: I1211 09:58:43.631336 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:43 crc kubenswrapper[4746]: I1211 09:58:43.633300 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:43 crc kubenswrapper[4746]: I1211 09:58:43.633758 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:43 crc kubenswrapper[4746]: I1211 09:58:43.633989 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:43 crc kubenswrapper[4746]: I1211 09:58:43.634925 4746 status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:43 crc kubenswrapper[4746]: I1211 09:58:43.635700 4746 status_manager.go:851] "Failed to get status for pod" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 
38.102.83.214:6443: connect: connection refused" Dec 11 09:58:43 crc kubenswrapper[4746]: I1211 09:58:43.636101 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:43 crc kubenswrapper[4746]: I1211 09:58:43.650433 4746 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0f95af3-00dc-44a8-98d1-a870f2276f19" Dec 11 09:58:43 crc kubenswrapper[4746]: I1211 09:58:43.650486 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0f95af3-00dc-44a8-98d1-a870f2276f19" Dec 11 09:58:43 crc kubenswrapper[4746]: E1211 09:58:43.651182 4746 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:43 crc kubenswrapper[4746]: I1211 09:58:43.652094 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:44 crc kubenswrapper[4746]: I1211 09:58:44.148973 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 09:58:44 crc kubenswrapper[4746]: I1211 09:58:44.149086 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 09:58:44 crc kubenswrapper[4746]: I1211 09:58:44.490990 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 09:58:44 crc kubenswrapper[4746]: I1211 09:58:44.492507 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:44 crc kubenswrapper[4746]: I1211 09:58:44.493128 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:44 crc kubenswrapper[4746]: I1211 09:58:44.493557 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:44 crc kubenswrapper[4746]: I1211 09:58:44.493998 4746 status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" 
pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:44 crc kubenswrapper[4746]: I1211 09:58:44.494272 4746 status_manager.go:851] "Failed to get status for pod" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:44 crc kubenswrapper[4746]: I1211 09:58:44.494518 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:44 crc kubenswrapper[4746]: I1211 09:58:44.566700 4746 patch_prober.go:28] interesting pod/image-registry-697d97f7c8-9qmb2 container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.29:5000/healthz\": dial tcp 10.217.0.29:5000: connect: connection refused" start-of-body= Dec 11 09:58:44 crc kubenswrapper[4746]: I1211 09:58:44.566775 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" podUID="d9d880ef-9ac5-4686-bf49-77406ca35135" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.29:5000/healthz\": dial tcp 10.217.0.29:5000: connect: connection refused" Dec 11 09:58:44 crc kubenswrapper[4746]: I1211 09:58:44.775832 4746 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe 
status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 11 09:58:44 crc kubenswrapper[4746]: I1211 09:58:44.775912 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 11 09:58:45 crc kubenswrapper[4746]: I1211 09:58:45.132762 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mh7kg" Dec 11 09:58:45 crc kubenswrapper[4746]: I1211 09:58:45.132833 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mh7kg" Dec 11 09:58:45 crc kubenswrapper[4746]: I1211 09:58:45.186999 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mh7kg" Dec 11 09:58:45 crc kubenswrapper[4746]: I1211 09:58:45.188200 4746 status_manager.go:851] "Failed to get status for pod" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:45 crc kubenswrapper[4746]: I1211 09:58:45.188790 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:45 crc kubenswrapper[4746]: I1211 09:58:45.189242 
4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:45 crc kubenswrapper[4746]: I1211 09:58:45.189525 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:45 crc kubenswrapper[4746]: I1211 09:58:45.189857 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:45 crc kubenswrapper[4746]: I1211 09:58:45.190236 4746 status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:46 crc kubenswrapper[4746]: I1211 09:58:45.999804 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mh7kg" Dec 11 09:58:46 crc kubenswrapper[4746]: I1211 09:58:46.001018 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:46 crc kubenswrapper[4746]: I1211 09:58:46.001672 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:46 crc kubenswrapper[4746]: I1211 09:58:46.002321 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:46 crc kubenswrapper[4746]: I1211 09:58:46.002649 4746 status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:46 crc kubenswrapper[4746]: I1211 09:58:46.003001 4746 status_manager.go:851] "Failed to get status for pod" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:46 crc kubenswrapper[4746]: I1211 09:58:46.003337 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:47 crc kubenswrapper[4746]: I1211 09:58:47.637296 4746 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:47 crc kubenswrapper[4746]: I1211 09:58:47.640036 4746 status_manager.go:851] "Failed to get status for pod" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:47 crc kubenswrapper[4746]: I1211 09:58:47.640741 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:47 crc kubenswrapper[4746]: I1211 09:58:47.641672 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:47 crc kubenswrapper[4746]: I1211 09:58:47.642306 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:47 crc kubenswrapper[4746]: I1211 09:58:47.643189 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:47 crc kubenswrapper[4746]: I1211 09:58:47.643811 4746 status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:48 crc kubenswrapper[4746]: I1211 09:58:48.857557 4746 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 11 09:58:48 crc kubenswrapper[4746]: I1211 09:58:48.858247 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 11 09:58:50 crc kubenswrapper[4746]: E1211 09:58:50.274730 4746 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.214:6443: connect: connection refused" interval="7s" Dec 11 09:58:50 crc kubenswrapper[4746]: I1211 09:58:50.708337 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_image-registry-697d97f7c8-9qmb2_d9d880ef-9ac5-4686-bf49-77406ca35135/registry/0.log" Dec 11 09:58:50 crc kubenswrapper[4746]: I1211 09:58:50.708394 4746 generic.go:334] "Generic (PLEG): container finished" podID="d9d880ef-9ac5-4686-bf49-77406ca35135" containerID="da766278b2272b5480fe84d156e96d5caff9d1f205a50852d11b90d860597cf3" exitCode=-1 Dec 11 09:58:50 crc kubenswrapper[4746]: I1211 09:58:50.708504 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" event={"ID":"d9d880ef-9ac5-4686-bf49-77406ca35135","Type":"ContainerDied","Data":"da766278b2272b5480fe84d156e96d5caff9d1f205a50852d11b90d860597cf3"} Dec 11 09:58:50 crc kubenswrapper[4746]: I1211 09:58:50.711548 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 11 09:58:50 crc kubenswrapper[4746]: I1211 09:58:50.711698 4746 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2" exitCode=1 Dec 11 09:58:50 crc kubenswrapper[4746]: I1211 09:58:50.711731 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2"} Dec 11 09:58:50 crc kubenswrapper[4746]: I1211 09:58:50.712507 4746 scope.go:117] "RemoveContainer" containerID="64dba670c06d4902ef4e0f01fbf3fd1e3dbc999eec85d83d7a62af8e412642f2" Dec 11 09:58:50 crc kubenswrapper[4746]: I1211 09:58:50.712861 4746 status_manager.go:851] 
"Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:50 crc kubenswrapper[4746]: I1211 09:58:50.713338 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:50 crc kubenswrapper[4746]: I1211 09:58:50.713715 4746 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:50 crc kubenswrapper[4746]: I1211 09:58:50.714138 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:50 crc kubenswrapper[4746]: I1211 09:58:50.714388 4746 status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:50 crc kubenswrapper[4746]: I1211 09:58:50.714631 4746 status_manager.go:851] "Failed 
to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:50 crc kubenswrapper[4746]: I1211 09:58:50.714850 4746 status_manager.go:851] "Failed to get status for pod" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:50 crc kubenswrapper[4746]: I1211 09:58:50.715141 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:51 crc kubenswrapper[4746]: W1211 09:58:51.149172 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-069fcf948fccbdfda9086f2d297536b189316ddac317656907fc033ef047da11 WatchSource:0}: Error finding container 069fcf948fccbdfda9086f2d297536b189316ddac317656907fc033ef047da11: Status 404 returned error can't find the container with id 069fcf948fccbdfda9086f2d297536b189316ddac317656907fc033ef047da11 Dec 11 09:58:51 crc kubenswrapper[4746]: E1211 09:58:51.866391 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:58:51Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:58:51Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:58:51Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-11T09:58:51Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:51 crc kubenswrapper[4746]: E1211 09:58:51.866747 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:51 crc kubenswrapper[4746]: E1211 09:58:51.867125 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:51 crc kubenswrapper[4746]: E1211 09:58:51.867716 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 
09:58:51 crc kubenswrapper[4746]: E1211 09:58:51.868435 4746 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:51 crc kubenswrapper[4746]: E1211 09:58:51.868460 4746 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.001948 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k24k2" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.002766 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.003100 4746 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.003867 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.005117 4746 status_manager.go:851] "Failed to get status for pod" 
podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.005300 4746 status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.005456 4746 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.005616 4746 status_manager.go:851] "Failed to get status for pod" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.005774 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.072970 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-k24k2" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.073875 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.074302 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.074662 4746 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.075178 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.075832 4746 status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 
38.102.83.214:6443: connect: connection refused" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.076135 4746 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.076707 4746 status_manager.go:851] "Failed to get status for pod" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:52 crc kubenswrapper[4746]: I1211 09:58:52.077535 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:53 crc kubenswrapper[4746]: E1211 09:58:53.128550 4746 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.214:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-pwjvd.188020cb66603679 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-pwjvd,UID:e2d46174-be57-41e7-9363-896cc0a860c6,APIVersion:v1,ResourceVersion:29494,FieldPath:spec.initContainers{extract-utilities},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-11 09:58:31.897265785 +0000 UTC m=+284.757129098,LastTimestamp:2025-12-11 09:58:31.897265785 +0000 UTC m=+284.757129098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.201521 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.202337 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.203143 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.203613 4746 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.203925 4746 status_manager.go:851] "Failed to get 
status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.204849 4746 status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.205166 4746 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.205448 4746 status_manager.go:851] "Failed to get status for pod" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.205950 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.775181 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.927832 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_image-registry-697d97f7c8-9qmb2_d9d880ef-9ac5-4686-bf49-77406ca35135/registry/0.log" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.927962 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.928990 4746 status_manager.go:851] "Failed to get status for pod" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.929762 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.930799 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.931953 4746 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.932430 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.932772 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.933194 4746 status_manager.go:851] "Failed to get status for pod" podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.933591 4746 status_manager.go:851] "Failed to get status for pod" podUID="d9d880ef-9ac5-4686-bf49-77406ca35135" pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-697d97f7c8-9qmb2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.933985 4746 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.947067 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9d880ef-9ac5-4686-bf49-77406ca35135-registry-certificates\") pod \"d9d880ef-9ac5-4686-bf49-77406ca35135\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.947153 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9d880ef-9ac5-4686-bf49-77406ca35135-installation-pull-secrets\") pod \"d9d880ef-9ac5-4686-bf49-77406ca35135\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.947243 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9d880ef-9ac5-4686-bf49-77406ca35135-trusted-ca\") pod \"d9d880ef-9ac5-4686-bf49-77406ca35135\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.947312 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9d880ef-9ac5-4686-bf49-77406ca35135-ca-trust-extracted\") pod \"d9d880ef-9ac5-4686-bf49-77406ca35135\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.947369 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-registry-tls\") pod \"d9d880ef-9ac5-4686-bf49-77406ca35135\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " Dec 11 09:58:54 crc 
kubenswrapper[4746]: I1211 09:58:54.947805 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"d9d880ef-9ac5-4686-bf49-77406ca35135\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.947993 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-bound-sa-token\") pod \"d9d880ef-9ac5-4686-bf49-77406ca35135\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.948096 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z59cm\" (UniqueName: \"kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-kube-api-access-z59cm\") pod \"d9d880ef-9ac5-4686-bf49-77406ca35135\" (UID: \"d9d880ef-9ac5-4686-bf49-77406ca35135\") " Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.948879 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d880ef-9ac5-4686-bf49-77406ca35135-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "d9d880ef-9ac5-4686-bf49-77406ca35135" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.948900 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9d880ef-9ac5-4686-bf49-77406ca35135-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "d9d880ef-9ac5-4686-bf49-77406ca35135" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.955255 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "d9d880ef-9ac5-4686-bf49-77406ca35135" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.956727 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "d9d880ef-9ac5-4686-bf49-77406ca35135" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.957459 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-kube-api-access-z59cm" (OuterVolumeSpecName: "kube-api-access-z59cm") pod "d9d880ef-9ac5-4686-bf49-77406ca35135" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135"). InnerVolumeSpecName "kube-api-access-z59cm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.958930 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d880ef-9ac5-4686-bf49-77406ca35135-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "d9d880ef-9ac5-4686-bf49-77406ca35135" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 09:58:54 crc kubenswrapper[4746]: I1211 09:58:54.967567 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9d880ef-9ac5-4686-bf49-77406ca35135-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "d9d880ef-9ac5-4686-bf49-77406ca35135" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.049997 4746 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.050040 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z59cm\" (UniqueName: \"kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-kube-api-access-z59cm\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.050117 4746 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d9d880ef-9ac5-4686-bf49-77406ca35135-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.050130 4746 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d9d880ef-9ac5-4686-bf49-77406ca35135-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.050143 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d9d880ef-9ac5-4686-bf49-77406ca35135-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.050157 4746 reconciler_common.go:293] "Volume detached 
for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d9d880ef-9ac5-4686-bf49-77406ca35135-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.050181 4746 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d9d880ef-9ac5-4686-bf49-77406ca35135-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.459353 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_image-registry-697d97f7c8-9qmb2_d9d880ef-9ac5-4686-bf49-77406ca35135/registry/0.log" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.459673 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" event={"ID":"d9d880ef-9ac5-4686-bf49-77406ca35135","Type":"ContainerDied","Data":"04436326e4e6babc725d7d217a5fc8e107b577671a47f22ac5528cb304643680"} Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.459979 4746 scope.go:117] "RemoveContainer" containerID="da766278b2272b5480fe84d156e96d5caff9d1f205a50852d11b90d860597cf3" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.469378 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"069fcf948fccbdfda9086f2d297536b189316ddac317656907fc033ef047da11"} Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.469915 4746 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0f95af3-00dc-44a8-98d1-a870f2276f19" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.469946 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0f95af3-00dc-44a8-98d1-a870f2276f19" Dec 11 09:58:55 crc kubenswrapper[4746]: E1211 09:58:55.470876 4746 mirror_client.go:138] 
"Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.470963 4746 status_manager.go:851] "Failed to get status for pod" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" pod="openshift-marketplace/community-operators-mh7kg" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-mh7kg\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.471757 4746 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.472343 4746 status_manager.go:851] "Failed to get status for pod" podUID="115e0a6a-c3eb-4931-af44-a645f0904e3e" pod="openshift-marketplace/redhat-operators-k24k2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-k24k2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.472780 4746 status_manager.go:851] "Failed to get status for pod" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.473345 4746 status_manager.go:851] "Failed to get status for pod" 
podUID="a5a6269e-2d2d-4034-b7af-b0cf87317c98" pod="openshift-marketplace/redhat-marketplace-kdhlr" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-kdhlr\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.473795 4746 status_manager.go:851] "Failed to get status for pod" podUID="d9d880ef-9ac5-4686-bf49-77406ca35135" pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/pods/image-registry-697d97f7c8-9qmb2\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.474478 4746 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.474994 4746 status_manager.go:851] "Failed to get status for pod" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" pod="openshift-marketplace/certified-operators-pwjvd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-pwjvd\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.475683 4746 status_manager.go:851] "Failed to get status for pod" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" pod="openshift-authentication/oauth-openshift-558db77b4-czfmv" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-czfmv\": dial tcp 38.102.83.214:6443: connect: connection refused" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.531649 4746 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "d9d880ef-9ac5-4686-bf49-77406ca35135" (UID: "d9d880ef-9ac5-4686-bf49-77406ca35135"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 11 09:58:55 crc kubenswrapper[4746]: I1211 09:58:55.869429 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:58:56 crc kubenswrapper[4746]: I1211 09:58:56.484129 4746 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="3a4fc675035c60b98a7a8e5c5e0dd364f7b994934a3867d8dd776918c77b00f5" exitCode=0 Dec 11 09:58:56 crc kubenswrapper[4746]: I1211 09:58:56.484318 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c8e598add4df3bfdbdc55abc713df58a1d57f646ab9020babcfdb3668bf4a3dd"} Dec 11 09:58:56 crc kubenswrapper[4746]: I1211 09:58:56.484368 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"3a4fc675035c60b98a7a8e5c5e0dd364f7b994934a3867d8dd776918c77b00f5"} Dec 11 09:58:56 crc kubenswrapper[4746]: I1211 09:58:56.489268 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kdhlr" event={"ID":"a5a6269e-2d2d-4034-b7af-b0cf87317c98","Type":"ContainerStarted","Data":"1562f4d1903bb685bf6a6817567e2aff71d233b2f85ee4addf27530682f519a7"} Dec 11 09:58:56 crc kubenswrapper[4746]: I1211 09:58:56.495943 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 11 09:58:56 crc kubenswrapper[4746]: I1211 09:58:56.496335 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"44559f27041b5faaa11a033dc4e51def193985d627f51ae51813a2c3073e1192"} Dec 11 09:58:56 crc kubenswrapper[4746]: I1211 09:58:56.496459 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-9qmb2" Dec 11 09:58:57 crc kubenswrapper[4746]: I1211 09:58:57.505402 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b5f44ee5fd51fb8e9e3ff85de04049460bd38f8be28c92620f600079963099e5"} Dec 11 09:58:58 crc kubenswrapper[4746]: I1211 09:58:58.520847 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7ec54a73e17381627077c402a9cf01467242595bf60dac74c8d4ad0d7a50762d"} Dec 11 09:58:58 crc kubenswrapper[4746]: I1211 09:58:58.521361 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c0944a1394c041fc5e8a75481f2beb4ae13d57aae61829cd108a5e65c16ff039"} Dec 11 09:58:58 crc kubenswrapper[4746]: I1211 09:58:58.857739 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:58:59 crc kubenswrapper[4746]: I1211 09:58:59.532615 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"142ac58c986a14bb3f1eadbc7f280f0bbe0991cd4aeb82dcea1f21f682276382"} Dec 11 09:58:59 crc kubenswrapper[4746]: I1211 09:58:59.533670 4746 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0f95af3-00dc-44a8-98d1-a870f2276f19" Dec 11 09:58:59 crc kubenswrapper[4746]: I1211 09:58:59.533702 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0f95af3-00dc-44a8-98d1-a870f2276f19" Dec 11 09:58:59 crc kubenswrapper[4746]: I1211 09:58:59.533705 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:59 crc kubenswrapper[4746]: I1211 09:58:59.544115 4746 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:58:59 crc kubenswrapper[4746]: I1211 09:58:59.753573 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kdhlr" Dec 11 09:58:59 crc kubenswrapper[4746]: I1211 09:58:59.754279 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kdhlr" Dec 11 09:58:59 crc kubenswrapper[4746]: I1211 09:58:59.810079 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kdhlr" Dec 11 09:59:00 crc kubenswrapper[4746]: I1211 09:59:00.543957 4746 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0f95af3-00dc-44a8-98d1-a870f2276f19" Dec 11 09:59:00 crc kubenswrapper[4746]: I1211 09:59:00.545651 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0f95af3-00dc-44a8-98d1-a870f2276f19" Dec 11 09:59:00 crc kubenswrapper[4746]: 
I1211 09:59:00.629529 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kdhlr" Dec 11 09:59:02 crc kubenswrapper[4746]: I1211 09:59:02.838659 4746 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="b16c4ea7-51a0-468a-8b42-3e5f46f4b9ab" Dec 11 09:59:03 crc kubenswrapper[4746]: I1211 09:59:03.590189 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Dec 11 09:59:03 crc kubenswrapper[4746]: I1211 09:59:03.592419 4746 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870" exitCode=1 Dec 11 09:59:03 crc kubenswrapper[4746]: I1211 09:59:03.592635 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870"} Dec 11 09:59:03 crc kubenswrapper[4746]: I1211 09:59:03.593627 4746 scope.go:117] "RemoveContainer" containerID="af712f3d4436bf3a8074b7ca8631de3614f0befc8220bb275c94c9afb0133870" Dec 11 09:59:04 crc kubenswrapper[4746]: I1211 09:59:04.606298 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Dec 11 09:59:04 crc kubenswrapper[4746]: I1211 09:59:04.607185 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8ca2858057d32e6902f480cb5c73c7cffb76dcccdfa50b841ca1d06cf490a570"} Dec 11 09:59:05 crc kubenswrapper[4746]: I1211 09:59:05.871233 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:59:05 crc kubenswrapper[4746]: I1211 09:59:05.874435 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:59:06 crc kubenswrapper[4746]: I1211 09:59:06.631263 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 11 09:59:25 crc kubenswrapper[4746]: I1211 09:59:25.890841 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 11 09:59:30 crc kubenswrapper[4746]: I1211 09:59:30.406235 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 11 09:59:30 crc kubenswrapper[4746]: I1211 09:59:30.625326 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 11 09:59:30 crc kubenswrapper[4746]: I1211 09:59:30.962488 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 11 09:59:31 crc kubenswrapper[4746]: I1211 09:59:31.114223 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 11 09:59:32 crc kubenswrapper[4746]: I1211 09:59:32.218314 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 11 09:59:32 crc kubenswrapper[4746]: I1211 09:59:32.278639 4746 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 11 09:59:32 crc kubenswrapper[4746]: I1211 09:59:32.603701 4746 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 11 09:59:32 crc kubenswrapper[4746]: I1211 09:59:32.684639 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 11 09:59:32 crc kubenswrapper[4746]: I1211 09:59:32.832830 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 11 09:59:32 crc kubenswrapper[4746]: I1211 09:59:32.863316 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 11 09:59:32 crc kubenswrapper[4746]: I1211 09:59:32.904950 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 09:59:33 crc kubenswrapper[4746]: I1211 09:59:33.078598 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 11 09:59:33 crc kubenswrapper[4746]: I1211 09:59:33.206879 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 11 09:59:33 crc kubenswrapper[4746]: I1211 09:59:33.518998 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 11 09:59:33 crc kubenswrapper[4746]: I1211 09:59:33.749421 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 11 09:59:33 crc kubenswrapper[4746]: I1211 09:59:33.803568 4746 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 11 09:59:34 crc kubenswrapper[4746]: I1211 09:59:34.031266 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 11 09:59:34 crc kubenswrapper[4746]: I1211 09:59:34.483864 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 11 09:59:34 crc kubenswrapper[4746]: I1211 09:59:34.862253 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 11 09:59:34 crc kubenswrapper[4746]: I1211 09:59:34.940460 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 11 09:59:35 crc kubenswrapper[4746]: I1211 09:59:35.007671 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 11 09:59:35 crc kubenswrapper[4746]: I1211 09:59:35.548419 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 11 09:59:35 crc kubenswrapper[4746]: I1211 09:59:35.691584 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 11 09:59:35 crc kubenswrapper[4746]: I1211 09:59:35.711500 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 11 09:59:35 crc kubenswrapper[4746]: I1211 09:59:35.752659 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 11 09:59:35 crc kubenswrapper[4746]: I1211 09:59:35.828609 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 11 09:59:35 crc 
kubenswrapper[4746]: I1211 09:59:35.892892 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 11 09:59:35 crc kubenswrapper[4746]: I1211 09:59:35.998801 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 11 09:59:36 crc kubenswrapper[4746]: I1211 09:59:36.505915 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 11 09:59:36 crc kubenswrapper[4746]: I1211 09:59:36.549965 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 11 09:59:36 crc kubenswrapper[4746]: I1211 09:59:36.635759 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 11 09:59:36 crc kubenswrapper[4746]: I1211 09:59:36.937414 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 11 09:59:36 crc kubenswrapper[4746]: I1211 09:59:36.993288 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 11 09:59:37 crc kubenswrapper[4746]: I1211 09:59:37.000560 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 11 09:59:37 crc kubenswrapper[4746]: I1211 09:59:37.085010 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 11 09:59:37 crc kubenswrapper[4746]: I1211 09:59:37.092242 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 11 09:59:37 crc kubenswrapper[4746]: I1211 09:59:37.296622 4746 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"serving-cert" Dec 11 09:59:37 crc kubenswrapper[4746]: I1211 09:59:37.607175 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 11 09:59:37 crc kubenswrapper[4746]: I1211 09:59:37.736695 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 11 09:59:37 crc kubenswrapper[4746]: I1211 09:59:37.828513 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 11 09:59:37 crc kubenswrapper[4746]: I1211 09:59:37.935369 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 11 09:59:38 crc kubenswrapper[4746]: I1211 09:59:38.074121 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 11 09:59:38 crc kubenswrapper[4746]: I1211 09:59:38.078452 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 11 09:59:38 crc kubenswrapper[4746]: I1211 09:59:38.078542 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 11 09:59:38 crc kubenswrapper[4746]: I1211 09:59:38.111614 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 11 09:59:38 crc kubenswrapper[4746]: I1211 09:59:38.150450 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 11 09:59:38 crc kubenswrapper[4746]: I1211 09:59:38.169232 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 11 
09:59:38 crc kubenswrapper[4746]: I1211 09:59:38.257865 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 11 09:59:38 crc kubenswrapper[4746]: I1211 09:59:38.447658 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 11 09:59:38 crc kubenswrapper[4746]: I1211 09:59:38.645480 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 11 09:59:38 crc kubenswrapper[4746]: I1211 09:59:38.926254 4746 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 11 09:59:39 crc kubenswrapper[4746]: I1211 09:59:39.486844 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 11 09:59:39 crc kubenswrapper[4746]: I1211 09:59:39.588952 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 11 09:59:39 crc kubenswrapper[4746]: I1211 09:59:39.612997 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 11 09:59:39 crc kubenswrapper[4746]: I1211 09:59:39.632568 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 11 09:59:39 crc kubenswrapper[4746]: I1211 09:59:39.796396 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 11 09:59:39 crc kubenswrapper[4746]: I1211 09:59:39.809337 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 11 09:59:39 crc kubenswrapper[4746]: I1211 09:59:39.825726 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" 
Dec 11 09:59:39 crc kubenswrapper[4746]: I1211 09:59:39.861841 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 11 09:59:39 crc kubenswrapper[4746]: I1211 09:59:39.927515 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 11 09:59:39 crc kubenswrapper[4746]: I1211 09:59:39.946319 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 11 09:59:39 crc kubenswrapper[4746]: I1211 09:59:39.995449 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 11 09:59:40 crc kubenswrapper[4746]: I1211 09:59:40.296085 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 11 09:59:40 crc kubenswrapper[4746]: I1211 09:59:40.304909 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 11 09:59:40 crc kubenswrapper[4746]: I1211 09:59:40.584032 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 11 09:59:40 crc kubenswrapper[4746]: I1211 09:59:40.657101 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 11 09:59:41 crc kubenswrapper[4746]: I1211 09:59:41.085382 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 11 09:59:41 crc kubenswrapper[4746]: I1211 09:59:41.130343 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 11 09:59:41 crc kubenswrapper[4746]: I1211 09:59:41.198251 4746 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-secret" Dec 11 09:59:41 crc kubenswrapper[4746]: I1211 09:59:41.230534 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 09:59:41 crc kubenswrapper[4746]: I1211 09:59:41.247413 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 11 09:59:41 crc kubenswrapper[4746]: I1211 09:59:41.339163 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 11 09:59:41 crc kubenswrapper[4746]: I1211 09:59:41.472397 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 11 09:59:41 crc kubenswrapper[4746]: I1211 09:59:41.525868 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 11 09:59:41 crc kubenswrapper[4746]: I1211 09:59:41.532919 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 11 09:59:41 crc kubenswrapper[4746]: I1211 09:59:41.661401 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 11 09:59:41 crc kubenswrapper[4746]: I1211 09:59:41.855082 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 11 09:59:41 crc kubenswrapper[4746]: I1211 09:59:41.859778 4746 generic.go:334] "Generic (PLEG): container finished" podID="71b25ce7-0542-4bbf-a7c7-ae760345ede3" containerID="cc526a9a0106d332ef44cae57384b0ea3faa64b3235a4e58cc5345770785002a" exitCode=0 Dec 11 09:59:41 crc kubenswrapper[4746]: I1211 09:59:41.859858 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" 
event={"ID":"71b25ce7-0542-4bbf-a7c7-ae760345ede3","Type":"ContainerDied","Data":"cc526a9a0106d332ef44cae57384b0ea3faa64b3235a4e58cc5345770785002a"} Dec 11 09:59:41 crc kubenswrapper[4746]: I1211 09:59:41.861919 4746 scope.go:117] "RemoveContainer" containerID="cc526a9a0106d332ef44cae57384b0ea3faa64b3235a4e58cc5345770785002a" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.022627 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.127344 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.130086 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.165747 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.182172 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.185422 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.227156 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.252107 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.261853 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 
09:59:42.287552 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.328130 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.397169 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.411393 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.807247 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.869707 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hfln2_71b25ce7-0542-4bbf-a7c7-ae760345ede3/marketplace-operator/1.log" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.870569 4746 generic.go:334] "Generic (PLEG): container finished" podID="71b25ce7-0542-4bbf-a7c7-ae760345ede3" containerID="e243b6bd4661d4bb0645f67673d57d18cb7ac83325094485dd6cdb303ce5e0c0" exitCode=1 Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.870627 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" event={"ID":"71b25ce7-0542-4bbf-a7c7-ae760345ede3","Type":"ContainerDied","Data":"e243b6bd4661d4bb0645f67673d57d18cb7ac83325094485dd6cdb303ce5e0c0"} Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.870725 4746 scope.go:117] "RemoveContainer" containerID="cc526a9a0106d332ef44cae57384b0ea3faa64b3235a4e58cc5345770785002a" Dec 11 09:59:42 crc kubenswrapper[4746]: I1211 09:59:42.871509 4746 
scope.go:117] "RemoveContainer" containerID="e243b6bd4661d4bb0645f67673d57d18cb7ac83325094485dd6cdb303ce5e0c0" Dec 11 09:59:42 crc kubenswrapper[4746]: E1211 09:59:42.872156 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-hfln2_openshift-marketplace(71b25ce7-0542-4bbf-a7c7-ae760345ede3)\"" pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" podUID="71b25ce7-0542-4bbf-a7c7-ae760345ede3" Dec 11 09:59:43 crc kubenswrapper[4746]: I1211 09:59:43.262817 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 11 09:59:43 crc kubenswrapper[4746]: I1211 09:59:43.467801 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 11 09:59:43 crc kubenswrapper[4746]: I1211 09:59:43.650260 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 11 09:59:43 crc kubenswrapper[4746]: I1211 09:59:43.832149 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 11 09:59:43 crc kubenswrapper[4746]: I1211 09:59:43.858711 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 11 09:59:43 crc kubenswrapper[4746]: I1211 09:59:43.887399 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hfln2_71b25ce7-0542-4bbf-a7c7-ae760345ede3/marketplace-operator/1.log" Dec 11 09:59:43 crc kubenswrapper[4746]: I1211 09:59:43.896579 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 
09:59:44.028779 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.116838 4746 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.118172 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k24k2" podStartSLOduration=67.352508607 podStartE2EDuration="1m23.118134944s" podCreationTimestamp="2025-12-11 09:58:21 +0000 UTC" firstStartedPulling="2025-12-11 09:58:23.360849418 +0000 UTC m=+276.220712762" lastFinishedPulling="2025-12-11 09:58:39.126475786 +0000 UTC m=+291.986339099" observedRunningTime="2025-12-11 09:59:02.779182908 +0000 UTC m=+315.639046221" watchObservedRunningTime="2025-12-11 09:59:44.118134944 +0000 UTC m=+356.977998297" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.122337 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kdhlr" podStartSLOduration=57.375563708 podStartE2EDuration="1m25.122319128s" podCreationTimestamp="2025-12-11 09:58:19 +0000 UTC" firstStartedPulling="2025-12-11 09:58:23.361415674 +0000 UTC m=+276.221279017" lastFinishedPulling="2025-12-11 09:58:51.108171114 +0000 UTC m=+303.968034437" observedRunningTime="2025-12-11 09:59:02.813151057 +0000 UTC m=+315.673014390" watchObservedRunningTime="2025-12-11 09:59:44.122319128 +0000 UTC m=+356.982182481" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.122964 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pwjvd" podStartSLOduration=76.26219758 podStartE2EDuration="1m21.122955765s" podCreationTimestamp="2025-12-11 09:58:23 +0000 UTC" firstStartedPulling="2025-12-11 09:58:35.822226209 +0000 UTC m=+288.682089522" lastFinishedPulling="2025-12-11 09:58:40.682984394 +0000 UTC 
m=+293.542847707" observedRunningTime="2025-12-11 09:59:02.858195185 +0000 UTC m=+315.718058498" watchObservedRunningTime="2025-12-11 09:59:44.122955765 +0000 UTC m=+356.982819119" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.124555 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mh7kg" podStartSLOduration=72.599909557 podStartE2EDuration="1m30.124545479s" podCreationTimestamp="2025-12-11 09:58:14 +0000 UTC" firstStartedPulling="2025-12-11 09:58:17.554832139 +0000 UTC m=+270.414695482" lastFinishedPulling="2025-12-11 09:58:35.079468091 +0000 UTC m=+287.939331404" observedRunningTime="2025-12-11 09:59:02.910208162 +0000 UTC m=+315.770071485" watchObservedRunningTime="2025-12-11 09:59:44.124545479 +0000 UTC m=+356.984408822" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.125680 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-9qmb2","openshift-authentication/oauth-openshift-558db77b4-czfmv","openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.125783 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6f96647944-qfkbc","openshift-kube-apiserver/kube-apiserver-crc"] Dec 11 09:59:44 crc kubenswrapper[4746]: E1211 09:59:44.126326 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" containerName="installer" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.126375 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" containerName="installer" Dec 11 09:59:44 crc kubenswrapper[4746]: E1211 09:59:44.126411 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" containerName="oauth-openshift" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.126431 4746 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" containerName="oauth-openshift" Dec 11 09:59:44 crc kubenswrapper[4746]: E1211 09:59:44.126474 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d880ef-9ac5-4686-bf49-77406ca35135" containerName="registry" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.126491 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d880ef-9ac5-4686-bf49-77406ca35135" containerName="registry" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.126578 4746 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0f95af3-00dc-44a8-98d1-a870f2276f19" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.126630 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0f95af3-00dc-44a8-98d1-a870f2276f19" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.126736 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9d880ef-9ac5-4686-bf49-77406ca35135" containerName="registry" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.126801 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" containerName="oauth-openshift" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.126825 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="1222305b-d227-4d6a-a76b-90a5ade7c176" containerName="installer" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.127762 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.134729 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.135685 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.136131 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.136864 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.137520 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.138549 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.138775 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.138971 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.139229 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.139534 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 
09:59:44.139981 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.140008 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.140378 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.147172 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.179958 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.181918 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.206004 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=45.205984896 podStartE2EDuration="45.205984896s" podCreationTimestamp="2025-12-11 09:58:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:59:44.204600288 +0000 UTC m=+357.064463621" watchObservedRunningTime="2025-12-11 09:59:44.205984896 +0000 UTC m=+357.065848219" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.207469 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.290550 4746 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.293580 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b50ae948-a801-47fd-b3be-3b4d41f9e156-audit-policies\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.293643 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.293698 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.293730 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.293775 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-user-template-login\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.293808 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.293836 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z7qz\" (UniqueName: \"kubernetes.io/projected/b50ae948-a801-47fd-b3be-3b4d41f9e156-kube-api-access-8z7qz\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.293861 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-user-template-error\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.293886 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/b50ae948-a801-47fd-b3be-3b4d41f9e156-audit-dir\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.294071 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-session\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.294133 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.294232 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.294262 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: 
\"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.294308 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.390092 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.395343 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.395400 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.395427 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-user-template-login\") pod \"oauth-openshift-6f96647944-qfkbc\" 
(UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.395447 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.395473 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z7qz\" (UniqueName: \"kubernetes.io/projected/b50ae948-a801-47fd-b3be-3b4d41f9e156-kube-api-access-8z7qz\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.395497 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-user-template-error\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.395524 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b50ae948-a801-47fd-b3be-3b4d41f9e156-audit-dir\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.395560 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-session\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.395785 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b50ae948-a801-47fd-b3be-3b4d41f9e156-audit-dir\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.396731 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.396745 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.396837 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " 
pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.396911 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.396994 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.397069 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b50ae948-a801-47fd-b3be-3b4d41f9e156-audit-policies\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.397106 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.398491 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.399582 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b50ae948-a801-47fd-b3be-3b4d41f9e156-audit-policies\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.399981 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.410844 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-session\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.430114 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc 
kubenswrapper[4746]: I1211 09:59:44.433439 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-user-template-error\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.434692 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-user-template-login\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.434785 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.435258 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.445314 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.447637 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b50ae948-a801-47fd-b3be-3b4d41f9e156-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.450878 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z7qz\" (UniqueName: \"kubernetes.io/projected/b50ae948-a801-47fd-b3be-3b4d41f9e156-kube-api-access-8z7qz\") pod \"oauth-openshift-6f96647944-qfkbc\" (UID: \"b50ae948-a801-47fd-b3be-3b4d41f9e156\") " pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.457131 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.476089 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.507395 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.581446 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.603555 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 11 09:59:44 crc kubenswrapper[4746]: I1211 09:59:44.641014 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 11 09:59:45 crc kubenswrapper[4746]: I1211 09:59:45.060734 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 11 09:59:45 crc kubenswrapper[4746]: I1211 09:59:45.086652 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 11 09:59:45 crc kubenswrapper[4746]: I1211 09:59:45.449554 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 11 09:59:45 crc kubenswrapper[4746]: I1211 09:59:45.558143 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 11 09:59:45 crc kubenswrapper[4746]: I1211 09:59:45.638397 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cae8380-85f7-4534-9bfc-46c5a3d6711f" path="/var/lib/kubelet/pods/6cae8380-85f7-4534-9bfc-46c5a3d6711f/volumes" Dec 11 09:59:45 crc kubenswrapper[4746]: I1211 09:59:45.639594 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="d9d880ef-9ac5-4686-bf49-77406ca35135" path="/var/lib/kubelet/pods/d9d880ef-9ac5-4686-bf49-77406ca35135/volumes" Dec 11 09:59:45 crc kubenswrapper[4746]: I1211 09:59:45.787660 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 11 09:59:45 crc kubenswrapper[4746]: I1211 09:59:45.864808 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 11 09:59:45 crc kubenswrapper[4746]: I1211 09:59:45.866285 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 11 09:59:45 crc kubenswrapper[4746]: I1211 09:59:45.895500 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 11 09:59:45 crc kubenswrapper[4746]: I1211 09:59:45.901097 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 11 09:59:45 crc kubenswrapper[4746]: I1211 09:59:45.905745 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 11 09:59:46 crc kubenswrapper[4746]: I1211 09:59:46.004782 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 11 09:59:46 crc kubenswrapper[4746]: I1211 09:59:46.063441 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 11 09:59:46 crc kubenswrapper[4746]: I1211 09:59:46.106276 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 11 09:59:46 crc kubenswrapper[4746]: I1211 09:59:46.171499 4746 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 11 09:59:46 crc kubenswrapper[4746]: I1211 09:59:46.318773 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 11 09:59:46 crc kubenswrapper[4746]: I1211 09:59:46.420023 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 11 09:59:46 crc kubenswrapper[4746]: I1211 09:59:46.654155 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 09:59:46 crc kubenswrapper[4746]: I1211 09:59:46.913682 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 11 09:59:46 crc kubenswrapper[4746]: I1211 09:59:46.982548 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 11 09:59:47 crc kubenswrapper[4746]: I1211 09:59:47.205953 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 11 09:59:47 crc kubenswrapper[4746]: I1211 09:59:47.382505 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 11 09:59:47 crc kubenswrapper[4746]: I1211 09:59:47.434312 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 11 09:59:47 crc kubenswrapper[4746]: I1211 09:59:47.508610 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 11 09:59:47 crc kubenswrapper[4746]: I1211 09:59:47.675303 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=1.6752754410000001 podStartE2EDuration="1.675275441s" 
podCreationTimestamp="2025-12-11 09:59:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 09:59:47.673380828 +0000 UTC m=+360.533244151" watchObservedRunningTime="2025-12-11 09:59:47.675275441 +0000 UTC m=+360.535138754" Dec 11 09:59:47 crc kubenswrapper[4746]: I1211 09:59:47.928742 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 11 09:59:48 crc kubenswrapper[4746]: I1211 09:59:48.084996 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 11 09:59:48 crc kubenswrapper[4746]: I1211 09:59:48.157149 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 11 09:59:48 crc kubenswrapper[4746]: I1211 09:59:48.189598 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 11 09:59:48 crc kubenswrapper[4746]: I1211 09:59:48.480859 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 11 09:59:48 crc kubenswrapper[4746]: I1211 09:59:48.653672 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:59:48 crc kubenswrapper[4746]: I1211 09:59:48.653732 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:59:48 crc kubenswrapper[4746]: I1211 09:59:48.658530 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:59:48 crc kubenswrapper[4746]: I1211 09:59:48.737528 4746 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"service-ca" Dec 11 09:59:48 crc kubenswrapper[4746]: I1211 09:59:48.906525 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 11 09:59:48 crc kubenswrapper[4746]: I1211 09:59:48.930257 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 11 09:59:49 crc kubenswrapper[4746]: I1211 09:59:49.019320 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 09:59:49 crc kubenswrapper[4746]: I1211 09:59:49.019375 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 09:59:49 crc kubenswrapper[4746]: I1211 09:59:49.020292 4746 scope.go:117] "RemoveContainer" containerID="e243b6bd4661d4bb0645f67673d57d18cb7ac83325094485dd6cdb303ce5e0c0" Dec 11 09:59:49 crc kubenswrapper[4746]: E1211 09:59:49.020487 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-hfln2_openshift-marketplace(71b25ce7-0542-4bbf-a7c7-ae760345ede3)\"" pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" podUID="71b25ce7-0542-4bbf-a7c7-ae760345ede3" Dec 11 09:59:49 crc kubenswrapper[4746]: I1211 09:59:49.162285 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 11 09:59:49 crc kubenswrapper[4746]: I1211 09:59:49.172301 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 11 09:59:49 crc kubenswrapper[4746]: I1211 09:59:49.308996 4746 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 11 09:59:49 crc kubenswrapper[4746]: I1211 09:59:49.528769 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 11 09:59:49 crc kubenswrapper[4746]: I1211 09:59:49.528805 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 11 09:59:49 crc kubenswrapper[4746]: I1211 09:59:49.577345 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 11 09:59:49 crc kubenswrapper[4746]: I1211 09:59:49.682495 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 11 09:59:49 crc kubenswrapper[4746]: I1211 09:59:49.747318 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 11 09:59:49 crc kubenswrapper[4746]: I1211 09:59:49.807415 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 11 09:59:50 crc kubenswrapper[4746]: I1211 09:59:50.714982 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 11 09:59:50 crc kubenswrapper[4746]: I1211 09:59:50.799459 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 11 09:59:50 crc kubenswrapper[4746]: I1211 09:59:50.847284 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 11 09:59:50 crc kubenswrapper[4746]: I1211 09:59:50.920833 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 11 09:59:50 crc kubenswrapper[4746]: I1211 09:59:50.960281 4746 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 11 09:59:50 crc kubenswrapper[4746]: I1211 09:59:50.967248 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 11 09:59:51 crc kubenswrapper[4746]: I1211 09:59:51.023911 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 11 09:59:51 crc kubenswrapper[4746]: I1211 09:59:51.131027 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 11 09:59:51 crc kubenswrapper[4746]: I1211 09:59:51.301335 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 11 09:59:51 crc kubenswrapper[4746]: I1211 09:59:51.385972 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 11 09:59:51 crc kubenswrapper[4746]: I1211 09:59:51.387892 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 11 09:59:51 crc kubenswrapper[4746]: I1211 09:59:51.544706 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 11 09:59:51 crc kubenswrapper[4746]: I1211 09:59:51.607288 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 11 09:59:52 crc kubenswrapper[4746]: I1211 09:59:52.232261 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 11 09:59:52 crc kubenswrapper[4746]: I1211 09:59:52.266332 4746 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 09:59:52 crc kubenswrapper[4746]: I1211 09:59:52.284901 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 11 09:59:52 crc kubenswrapper[4746]: I1211 09:59:52.297625 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 11 09:59:52 crc kubenswrapper[4746]: I1211 09:59:52.480123 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 11 09:59:52 crc kubenswrapper[4746]: I1211 09:59:52.563395 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 11 09:59:52 crc kubenswrapper[4746]: I1211 09:59:52.573289 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 11 09:59:52 crc kubenswrapper[4746]: I1211 09:59:52.684497 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 11 09:59:52 crc kubenswrapper[4746]: I1211 09:59:52.732320 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 11 09:59:52 crc kubenswrapper[4746]: I1211 09:59:52.933301 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 09:59:53 crc kubenswrapper[4746]: I1211 09:59:53.064616 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 11 09:59:53 crc kubenswrapper[4746]: I1211 09:59:53.167491 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 11 
09:59:53 crc kubenswrapper[4746]: I1211 09:59:53.186622 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 11 09:59:53 crc kubenswrapper[4746]: I1211 09:59:53.314583 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 11 09:59:53 crc kubenswrapper[4746]: I1211 09:59:53.416420 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 11 09:59:53 crc kubenswrapper[4746]: I1211 09:59:53.496614 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 11 09:59:53 crc kubenswrapper[4746]: I1211 09:59:53.499211 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 11 09:59:53 crc kubenswrapper[4746]: I1211 09:59:53.504723 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 11 09:59:53 crc kubenswrapper[4746]: I1211 09:59:53.567157 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 11 09:59:53 crc kubenswrapper[4746]: I1211 09:59:53.807908 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 11 09:59:54 crc kubenswrapper[4746]: I1211 09:59:54.146619 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 11 09:59:54 crc kubenswrapper[4746]: I1211 09:59:54.221584 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 11 09:59:54 crc kubenswrapper[4746]: I1211 09:59:54.289149 4746 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 11 09:59:54 crc kubenswrapper[4746]: I1211 09:59:54.416903 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 09:59:54 crc kubenswrapper[4746]: I1211 09:59:54.720310 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 11 09:59:54 crc kubenswrapper[4746]: I1211 09:59:54.737694 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 11 09:59:54 crc kubenswrapper[4746]: I1211 09:59:54.850113 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 11 09:59:54 crc kubenswrapper[4746]: I1211 09:59:54.925771 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 11 09:59:55 crc kubenswrapper[4746]: I1211 09:59:55.055324 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 11 09:59:55 crc kubenswrapper[4746]: I1211 09:59:55.075327 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 11 09:59:55 crc kubenswrapper[4746]: I1211 09:59:55.080016 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 11 09:59:55 crc kubenswrapper[4746]: I1211 09:59:55.092294 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 11 09:59:55 crc kubenswrapper[4746]: I1211 09:59:55.253989 4746 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 11 09:59:55 crc kubenswrapper[4746]: I1211 09:59:55.360026 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 11 09:59:55 crc kubenswrapper[4746]: I1211 09:59:55.424631 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 11 09:59:55 crc kubenswrapper[4746]: I1211 09:59:55.491839 4746 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 11 09:59:56 crc kubenswrapper[4746]: I1211 09:59:56.057661 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 11 09:59:56 crc kubenswrapper[4746]: I1211 09:59:56.282755 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 11 09:59:56 crc kubenswrapper[4746]: I1211 09:59:56.339524 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 11 09:59:56 crc kubenswrapper[4746]: I1211 09:59:56.388708 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 11 09:59:56 crc kubenswrapper[4746]: I1211 09:59:56.480656 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 11 09:59:56 crc kubenswrapper[4746]: I1211 09:59:56.690286 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 11 09:59:56 crc kubenswrapper[4746]: I1211 09:59:56.900830 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 11 09:59:56 crc kubenswrapper[4746]: I1211 09:59:56.902286 4746 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-image-registry"/"trusted-ca" Dec 11 09:59:56 crc kubenswrapper[4746]: I1211 09:59:56.935763 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 11 09:59:57 crc kubenswrapper[4746]: I1211 09:59:57.011940 4746 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 11 09:59:57 crc kubenswrapper[4746]: I1211 09:59:57.046736 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 11 09:59:57 crc kubenswrapper[4746]: I1211 09:59:57.314496 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 11 09:59:57 crc kubenswrapper[4746]: I1211 09:59:57.764584 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 09:59:57 crc kubenswrapper[4746]: I1211 09:59:57.885595 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 11 09:59:57 crc kubenswrapper[4746]: I1211 09:59:57.953262 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 11 09:59:58 crc kubenswrapper[4746]: I1211 09:59:58.168335 4746 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 09:59:58 crc kubenswrapper[4746]: I1211 09:59:58.168826 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://c8feab726c9f0c01017a2d31a24bf145c32f297dce47fe22c7c9e74f8fb20b81" gracePeriod=5 Dec 11 09:59:58 crc kubenswrapper[4746]: I1211 09:59:58.350138 4746 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 11 09:59:58 crc kubenswrapper[4746]: I1211 09:59:58.601533 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 11 09:59:58 crc kubenswrapper[4746]: I1211 09:59:58.620976 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 11 09:59:58 crc kubenswrapper[4746]: I1211 09:59:58.821827 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 11 09:59:58 crc kubenswrapper[4746]: I1211 09:59:58.863026 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 11 09:59:59 crc kubenswrapper[4746]: I1211 09:59:59.013413 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 11 09:59:59 crc kubenswrapper[4746]: I1211 09:59:59.259932 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 11 09:59:59 crc kubenswrapper[4746]: I1211 09:59:59.691024 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 11 09:59:59 crc kubenswrapper[4746]: I1211 09:59:59.702896 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 11 09:59:59 crc kubenswrapper[4746]: I1211 09:59:59.727034 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 09:59:59 crc kubenswrapper[4746]: I1211 09:59:59.809717 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 11 09:59:59 crc kubenswrapper[4746]: I1211 
09:59:59.854569 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 11 09:59:59 crc kubenswrapper[4746]: I1211 09:59:59.877130 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 09:59:59 crc kubenswrapper[4746]: I1211 09:59:59.877211 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.192247 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg"] Dec 11 10:00:00 crc kubenswrapper[4746]: E1211 10:00:00.192667 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.192694 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.192845 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.193430 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.195696 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.196246 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.280773 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75cd9e37-60e9-42c2-9566-29af704fd01f-config-volume\") pod \"collect-profiles-29424120-l55sg\" (UID: \"75cd9e37-60e9-42c2-9566-29af704fd01f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.281395 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75cd9e37-60e9-42c2-9566-29af704fd01f-secret-volume\") pod \"collect-profiles-29424120-l55sg\" (UID: \"75cd9e37-60e9-42c2-9566-29af704fd01f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.281471 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkbg4\" (UniqueName: \"kubernetes.io/projected/75cd9e37-60e9-42c2-9566-29af704fd01f-kube-api-access-rkbg4\") pod \"collect-profiles-29424120-l55sg\" (UID: \"75cd9e37-60e9-42c2-9566-29af704fd01f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.285296 4746 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"etcd-serving-ca" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.383593 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75cd9e37-60e9-42c2-9566-29af704fd01f-secret-volume\") pod \"collect-profiles-29424120-l55sg\" (UID: \"75cd9e37-60e9-42c2-9566-29af704fd01f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.383726 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkbg4\" (UniqueName: \"kubernetes.io/projected/75cd9e37-60e9-42c2-9566-29af704fd01f-kube-api-access-rkbg4\") pod \"collect-profiles-29424120-l55sg\" (UID: \"75cd9e37-60e9-42c2-9566-29af704fd01f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.384242 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75cd9e37-60e9-42c2-9566-29af704fd01f-config-volume\") pod \"collect-profiles-29424120-l55sg\" (UID: \"75cd9e37-60e9-42c2-9566-29af704fd01f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.386737 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75cd9e37-60e9-42c2-9566-29af704fd01f-config-volume\") pod \"collect-profiles-29424120-l55sg\" (UID: \"75cd9e37-60e9-42c2-9566-29af704fd01f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.404450 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75cd9e37-60e9-42c2-9566-29af704fd01f-secret-volume\") pod 
\"collect-profiles-29424120-l55sg\" (UID: \"75cd9e37-60e9-42c2-9566-29af704fd01f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.419934 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkbg4\" (UniqueName: \"kubernetes.io/projected/75cd9e37-60e9-42c2-9566-29af704fd01f-kube-api-access-rkbg4\") pod \"collect-profiles-29424120-l55sg\" (UID: \"75cd9e37-60e9-42c2-9566-29af704fd01f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.514164 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.542195 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f96647944-qfkbc"] Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.551532 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.557837 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg"] Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.637426 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.757306 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f96647944-qfkbc"] Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.824204 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg"] Dec 11 10:00:00 crc kubenswrapper[4746]: W1211 
10:00:00.831177 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75cd9e37_60e9_42c2_9566_29af704fd01f.slice/crio-008a8f43ed9a6feb1b7dbf262825efd65c3b8ee8af9c85cb8a550581626e6a9b WatchSource:0}: Error finding container 008a8f43ed9a6feb1b7dbf262825efd65c3b8ee8af9c85cb8a550581626e6a9b: Status 404 returned error can't find the container with id 008a8f43ed9a6feb1b7dbf262825efd65c3b8ee8af9c85cb8a550581626e6a9b Dec 11 10:00:00 crc kubenswrapper[4746]: I1211 10:00:00.941770 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 11 10:00:01 crc kubenswrapper[4746]: I1211 10:00:01.006866 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" event={"ID":"75cd9e37-60e9-42c2-9566-29af704fd01f","Type":"ContainerStarted","Data":"0ab6992344d6860830874a03cb33b6644204477d33e050d995bd93c3667c2e41"} Dec 11 10:00:01 crc kubenswrapper[4746]: I1211 10:00:01.006919 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" event={"ID":"75cd9e37-60e9-42c2-9566-29af704fd01f","Type":"ContainerStarted","Data":"008a8f43ed9a6feb1b7dbf262825efd65c3b8ee8af9c85cb8a550581626e6a9b"} Dec 11 10:00:01 crc kubenswrapper[4746]: I1211 10:00:01.010133 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" event={"ID":"b50ae948-a801-47fd-b3be-3b4d41f9e156","Type":"ContainerStarted","Data":"09e1a9028eba6d22dc1bf40a2a1f78353934c7e3d652c9bd0582a10cad0f546e"} Dec 11 10:00:01 crc kubenswrapper[4746]: I1211 10:00:01.026590 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" podStartSLOduration=1.026567661 podStartE2EDuration="1.026567661s" 
podCreationTimestamp="2025-12-11 10:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:00:01.022967084 +0000 UTC m=+373.882830397" watchObservedRunningTime="2025-12-11 10:00:01.026567661 +0000 UTC m=+373.886430974" Dec 11 10:00:01 crc kubenswrapper[4746]: I1211 10:00:01.046417 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 11 10:00:01 crc kubenswrapper[4746]: I1211 10:00:01.239606 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 11 10:00:01 crc kubenswrapper[4746]: I1211 10:00:01.517406 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 11 10:00:01 crc kubenswrapper[4746]: I1211 10:00:01.629832 4746 scope.go:117] "RemoveContainer" containerID="e243b6bd4661d4bb0645f67673d57d18cb7ac83325094485dd6cdb303ce5e0c0" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:02.017119 4746 generic.go:334] "Generic (PLEG): container finished" podID="75cd9e37-60e9-42c2-9566-29af704fd01f" containerID="0ab6992344d6860830874a03cb33b6644204477d33e050d995bd93c3667c2e41" exitCode=0 Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:02.017179 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" event={"ID":"75cd9e37-60e9-42c2-9566-29af704fd01f","Type":"ContainerDied","Data":"0ab6992344d6860830874a03cb33b6644204477d33e050d995bd93c3667c2e41"} Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:02.020304 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" 
event={"ID":"b50ae948-a801-47fd-b3be-3b4d41f9e156","Type":"ContainerStarted","Data":"37bac7000c206fcafbda17dfde963e45598ff910b8dd2086a2cd9f8d8e37b4ac"} Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:02.023550 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:02.029384 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:02.078236 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6f96647944-qfkbc" podStartSLOduration=115.078216473 podStartE2EDuration="1m55.078216473s" podCreationTimestamp="2025-12-11 09:58:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:00:02.077853773 +0000 UTC m=+374.937717096" watchObservedRunningTime="2025-12-11 10:00:02.078216473 +0000 UTC m=+374.938079786" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:02.914727 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.032597 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hfln2_71b25ce7-0542-4bbf-a7c7-ae760345ede3/marketplace-operator/1.log" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.032753 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" event={"ID":"71b25ce7-0542-4bbf-a7c7-ae760345ede3","Type":"ContainerStarted","Data":"c0d0d1b32700654418ff2715575ff0408b4a49dbf309dd4609800220f30b2eab"} Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.354309 4746 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.429026 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75cd9e37-60e9-42c2-9566-29af704fd01f-config-volume\") pod \"75cd9e37-60e9-42c2-9566-29af704fd01f\" (UID: \"75cd9e37-60e9-42c2-9566-29af704fd01f\") " Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.429207 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkbg4\" (UniqueName: \"kubernetes.io/projected/75cd9e37-60e9-42c2-9566-29af704fd01f-kube-api-access-rkbg4\") pod \"75cd9e37-60e9-42c2-9566-29af704fd01f\" (UID: \"75cd9e37-60e9-42c2-9566-29af704fd01f\") " Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.429240 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75cd9e37-60e9-42c2-9566-29af704fd01f-secret-volume\") pod \"75cd9e37-60e9-42c2-9566-29af704fd01f\" (UID: \"75cd9e37-60e9-42c2-9566-29af704fd01f\") " Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.430763 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75cd9e37-60e9-42c2-9566-29af704fd01f-config-volume" (OuterVolumeSpecName: "config-volume") pod "75cd9e37-60e9-42c2-9566-29af704fd01f" (UID: "75cd9e37-60e9-42c2-9566-29af704fd01f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.435695 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cd9e37-60e9-42c2-9566-29af704fd01f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "75cd9e37-60e9-42c2-9566-29af704fd01f" (UID: "75cd9e37-60e9-42c2-9566-29af704fd01f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.435805 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75cd9e37-60e9-42c2-9566-29af704fd01f-kube-api-access-rkbg4" (OuterVolumeSpecName: "kube-api-access-rkbg4") pod "75cd9e37-60e9-42c2-9566-29af704fd01f" (UID: "75cd9e37-60e9-42c2-9566-29af704fd01f"). InnerVolumeSpecName "kube-api-access-rkbg4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.485531 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.531171 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75cd9e37-60e9-42c2-9566-29af704fd01f-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.531279 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkbg4\" (UniqueName: \"kubernetes.io/projected/75cd9e37-60e9-42c2-9566-29af704fd01f-kube-api-access-rkbg4\") on node \"crc\" DevicePath \"\"" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.531324 4746 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/75cd9e37-60e9-42c2-9566-29af704fd01f-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.593810 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.736455 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.736552 4746 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.835274 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.835366 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.835451 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.835490 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.835518 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.835576 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.835657 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.835716 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.835724 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.836213 4746 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.836236 4746 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.836251 4746 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.836265 4746 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.848560 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.878063 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 11 10:00:03 crc kubenswrapper[4746]: I1211 10:00:03.937623 4746 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 11 10:00:04 crc kubenswrapper[4746]: I1211 10:00:04.041526 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" event={"ID":"75cd9e37-60e9-42c2-9566-29af704fd01f","Type":"ContainerDied","Data":"008a8f43ed9a6feb1b7dbf262825efd65c3b8ee8af9c85cb8a550581626e6a9b"} Dec 11 10:00:04 crc kubenswrapper[4746]: I1211 10:00:04.041576 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="008a8f43ed9a6feb1b7dbf262825efd65c3b8ee8af9c85cb8a550581626e6a9b" Dec 11 10:00:04 crc kubenswrapper[4746]: I1211 10:00:04.041597 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg" Dec 11 10:00:04 crc kubenswrapper[4746]: I1211 10:00:04.045688 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 11 10:00:04 crc kubenswrapper[4746]: I1211 10:00:04.045741 4746 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="c8feab726c9f0c01017a2d31a24bf145c32f297dce47fe22c7c9e74f8fb20b81" exitCode=137 Dec 11 10:00:04 crc kubenswrapper[4746]: I1211 10:00:04.046393 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 11 10:00:04 crc kubenswrapper[4746]: I1211 10:00:04.046800 4746 scope.go:117] "RemoveContainer" containerID="c8feab726c9f0c01017a2d31a24bf145c32f297dce47fe22c7c9e74f8fb20b81" Dec 11 10:00:04 crc kubenswrapper[4746]: I1211 10:00:04.065548 4746 scope.go:117] "RemoveContainer" containerID="c8feab726c9f0c01017a2d31a24bf145c32f297dce47fe22c7c9e74f8fb20b81" Dec 11 10:00:04 crc kubenswrapper[4746]: E1211 10:00:04.066103 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8feab726c9f0c01017a2d31a24bf145c32f297dce47fe22c7c9e74f8fb20b81\": container with ID starting with c8feab726c9f0c01017a2d31a24bf145c32f297dce47fe22c7c9e74f8fb20b81 not found: ID does not exist" containerID="c8feab726c9f0c01017a2d31a24bf145c32f297dce47fe22c7c9e74f8fb20b81" Dec 11 10:00:04 crc kubenswrapper[4746]: I1211 10:00:04.066148 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8feab726c9f0c01017a2d31a24bf145c32f297dce47fe22c7c9e74f8fb20b81"} err="failed to get container status \"c8feab726c9f0c01017a2d31a24bf145c32f297dce47fe22c7c9e74f8fb20b81\": rpc error: code = NotFound desc = could not find container \"c8feab726c9f0c01017a2d31a24bf145c32f297dce47fe22c7c9e74f8fb20b81\": container with ID starting with c8feab726c9f0c01017a2d31a24bf145c32f297dce47fe22c7c9e74f8fb20b81 not found: ID does not exist" Dec 11 10:00:04 crc kubenswrapper[4746]: I1211 10:00:04.103530 4746 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 11 10:00:04 crc kubenswrapper[4746]: I1211 10:00:04.389230 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 11 10:00:05 crc kubenswrapper[4746]: I1211 10:00:05.639887 4746 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 11 10:00:05 crc kubenswrapper[4746]: I1211 10:00:05.641582 4746 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 11 10:00:05 crc kubenswrapper[4746]: I1211 10:00:05.661123 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 10:00:05 crc kubenswrapper[4746]: I1211 10:00:05.661159 4746 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="ad50aac3-e870-4b29-9109-f971b6e6097a" Dec 11 10:00:05 crc kubenswrapper[4746]: I1211 10:00:05.665318 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 11 10:00:05 crc kubenswrapper[4746]: I1211 10:00:05.665365 4746 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="ad50aac3-e870-4b29-9109-f971b6e6097a" Dec 11 10:00:05 crc kubenswrapper[4746]: I1211 10:00:05.930571 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 11 10:00:07 crc kubenswrapper[4746]: I1211 10:00:07.278641 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 11 10:00:09 crc kubenswrapper[4746]: I1211 10:00:09.019828 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 10:00:09 crc kubenswrapper[4746]: I1211 10:00:09.027005 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-79b997595-hfln2" Dec 11 10:00:29 crc kubenswrapper[4746]: I1211 10:00:29.877705 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:00:29 crc kubenswrapper[4746]: I1211 10:00:29.878341 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:00:33 crc kubenswrapper[4746]: I1211 10:00:33.850025 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-26ppb"] Dec 11 10:00:33 crc kubenswrapper[4746]: I1211 10:00:33.850923 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" podUID="ce723dd2-6ea2-49d1-9faf-c92026630754" containerName="controller-manager" containerID="cri-o://666ce9fd7b9dd4cb5863d39532feb20a4ac288eba70cd1f8db4bde8a63a12cee" gracePeriod=30 Dec 11 10:00:33 crc kubenswrapper[4746]: I1211 10:00:33.992070 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2"] Dec 11 10:00:33 crc kubenswrapper[4746]: I1211 10:00:33.992371 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" podUID="fda084f4-b624-4036-8da8-27d83af188ba" containerName="route-controller-manager" containerID="cri-o://c45c54b7abf7af48cb84f22474ff44540a7f1f5d16f48000e7b7515718fe0433" 
gracePeriod=30 Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.237458 4746 generic.go:334] "Generic (PLEG): container finished" podID="fda084f4-b624-4036-8da8-27d83af188ba" containerID="c45c54b7abf7af48cb84f22474ff44540a7f1f5d16f48000e7b7515718fe0433" exitCode=0 Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.237553 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" event={"ID":"fda084f4-b624-4036-8da8-27d83af188ba","Type":"ContainerDied","Data":"c45c54b7abf7af48cb84f22474ff44540a7f1f5d16f48000e7b7515718fe0433"} Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.241174 4746 generic.go:334] "Generic (PLEG): container finished" podID="ce723dd2-6ea2-49d1-9faf-c92026630754" containerID="666ce9fd7b9dd4cb5863d39532feb20a4ac288eba70cd1f8db4bde8a63a12cee" exitCode=0 Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.241227 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" event={"ID":"ce723dd2-6ea2-49d1-9faf-c92026630754","Type":"ContainerDied","Data":"666ce9fd7b9dd4cb5863d39532feb20a4ac288eba70cd1f8db4bde8a63a12cee"} Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.435194 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.599777 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda084f4-b624-4036-8da8-27d83af188ba-client-ca" (OuterVolumeSpecName: "client-ca") pod "fda084f4-b624-4036-8da8-27d83af188ba" (UID: "fda084f4-b624-4036-8da8-27d83af188ba"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.599965 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fda084f4-b624-4036-8da8-27d83af188ba-client-ca\") pod \"fda084f4-b624-4036-8da8-27d83af188ba\" (UID: \"fda084f4-b624-4036-8da8-27d83af188ba\") " Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.600189 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flqmx\" (UniqueName: \"kubernetes.io/projected/fda084f4-b624-4036-8da8-27d83af188ba-kube-api-access-flqmx\") pod \"fda084f4-b624-4036-8da8-27d83af188ba\" (UID: \"fda084f4-b624-4036-8da8-27d83af188ba\") " Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.600704 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fda084f4-b624-4036-8da8-27d83af188ba-serving-cert\") pod \"fda084f4-b624-4036-8da8-27d83af188ba\" (UID: \"fda084f4-b624-4036-8da8-27d83af188ba\") " Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.600835 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fda084f4-b624-4036-8da8-27d83af188ba-config\") pod \"fda084f4-b624-4036-8da8-27d83af188ba\" (UID: \"fda084f4-b624-4036-8da8-27d83af188ba\") " Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.601614 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fda084f4-b624-4036-8da8-27d83af188ba-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.602011 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda084f4-b624-4036-8da8-27d83af188ba-config" (OuterVolumeSpecName: "config") pod "fda084f4-b624-4036-8da8-27d83af188ba" 
(UID: "fda084f4-b624-4036-8da8-27d83af188ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.614749 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda084f4-b624-4036-8da8-27d83af188ba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fda084f4-b624-4036-8da8-27d83af188ba" (UID: "fda084f4-b624-4036-8da8-27d83af188ba"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.617638 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda084f4-b624-4036-8da8-27d83af188ba-kube-api-access-flqmx" (OuterVolumeSpecName: "kube-api-access-flqmx") pod "fda084f4-b624-4036-8da8-27d83af188ba" (UID: "fda084f4-b624-4036-8da8-27d83af188ba"). InnerVolumeSpecName "kube-api-access-flqmx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.678210 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.702133 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-proxy-ca-bundles\") pod \"ce723dd2-6ea2-49d1-9faf-c92026630754\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.702212 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fczdw\" (UniqueName: \"kubernetes.io/projected/ce723dd2-6ea2-49d1-9faf-c92026630754-kube-api-access-fczdw\") pod \"ce723dd2-6ea2-49d1-9faf-c92026630754\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.702248 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce723dd2-6ea2-49d1-9faf-c92026630754-serving-cert\") pod \"ce723dd2-6ea2-49d1-9faf-c92026630754\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.702284 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-config\") pod \"ce723dd2-6ea2-49d1-9faf-c92026630754\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.702304 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-client-ca\") pod \"ce723dd2-6ea2-49d1-9faf-c92026630754\" (UID: \"ce723dd2-6ea2-49d1-9faf-c92026630754\") " Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.702709 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fda084f4-b624-4036-8da8-27d83af188ba-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.702726 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flqmx\" (UniqueName: \"kubernetes.io/projected/fda084f4-b624-4036-8da8-27d83af188ba-kube-api-access-flqmx\") on node \"crc\" DevicePath \"\"" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.702735 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fda084f4-b624-4036-8da8-27d83af188ba-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.703209 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ce723dd2-6ea2-49d1-9faf-c92026630754" (UID: "ce723dd2-6ea2-49d1-9faf-c92026630754"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.703331 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-client-ca" (OuterVolumeSpecName: "client-ca") pod "ce723dd2-6ea2-49d1-9faf-c92026630754" (UID: "ce723dd2-6ea2-49d1-9faf-c92026630754"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.703878 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-config" (OuterVolumeSpecName: "config") pod "ce723dd2-6ea2-49d1-9faf-c92026630754" (UID: "ce723dd2-6ea2-49d1-9faf-c92026630754"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.706618 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce723dd2-6ea2-49d1-9faf-c92026630754-kube-api-access-fczdw" (OuterVolumeSpecName: "kube-api-access-fczdw") pod "ce723dd2-6ea2-49d1-9faf-c92026630754" (UID: "ce723dd2-6ea2-49d1-9faf-c92026630754"). InnerVolumeSpecName "kube-api-access-fczdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.707572 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce723dd2-6ea2-49d1-9faf-c92026630754-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ce723dd2-6ea2-49d1-9faf-c92026630754" (UID: "ce723dd2-6ea2-49d1-9faf-c92026630754"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.805237 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce723dd2-6ea2-49d1-9faf-c92026630754-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.805290 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.805304 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-client-ca\") on node \"crc\" DevicePath \"\"" Dec 11 10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.805319 4746 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ce723dd2-6ea2-49d1-9faf-c92026630754-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 11 
10:00:34 crc kubenswrapper[4746]: I1211 10:00:34.805341 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fczdw\" (UniqueName: \"kubernetes.io/projected/ce723dd2-6ea2-49d1-9faf-c92026630754-kube-api-access-fczdw\") on node \"crc\" DevicePath \"\"" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.250211 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.250190 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-26ppb" event={"ID":"ce723dd2-6ea2-49d1-9faf-c92026630754","Type":"ContainerDied","Data":"f9b4bcbb5b008e9b2aa418df7ae96d6871aa27feee08be2c4cbcde9a94ee0d0a"} Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.251389 4746 scope.go:117] "RemoveContainer" containerID="666ce9fd7b9dd4cb5863d39532feb20a4ac288eba70cd1f8db4bde8a63a12cee" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.251560 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" event={"ID":"fda084f4-b624-4036-8da8-27d83af188ba","Type":"ContainerDied","Data":"89e41111299184017949e1927ba3b04a87b8963d07a410317b00012b7b679b06"} Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.251741 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.274147 4746 scope.go:117] "RemoveContainer" containerID="c45c54b7abf7af48cb84f22474ff44540a7f1f5d16f48000e7b7515718fe0433" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.290456 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-26ppb"] Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.307021 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-26ppb"] Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.316208 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2"] Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.321165 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-fgrh2"] Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.639679 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce723dd2-6ea2-49d1-9faf-c92026630754" path="/var/lib/kubelet/pods/ce723dd2-6ea2-49d1-9faf-c92026630754/volumes" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.640473 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda084f4-b624-4036-8da8-27d83af188ba" path="/var/lib/kubelet/pods/fda084f4-b624-4036-8da8-27d83af188ba/volumes" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.641074 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9cc9dd689-gmjmh"] Dec 11 10:00:35 crc kubenswrapper[4746]: E1211 10:00:35.641396 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75cd9e37-60e9-42c2-9566-29af704fd01f" containerName="collect-profiles" Dec 11 10:00:35 crc 
kubenswrapper[4746]: I1211 10:00:35.641419 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="75cd9e37-60e9-42c2-9566-29af704fd01f" containerName="collect-profiles" Dec 11 10:00:35 crc kubenswrapper[4746]: E1211 10:00:35.641452 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce723dd2-6ea2-49d1-9faf-c92026630754" containerName="controller-manager" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.641462 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce723dd2-6ea2-49d1-9faf-c92026630754" containerName="controller-manager" Dec 11 10:00:35 crc kubenswrapper[4746]: E1211 10:00:35.641483 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda084f4-b624-4036-8da8-27d83af188ba" containerName="route-controller-manager" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.641492 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda084f4-b624-4036-8da8-27d83af188ba" containerName="route-controller-manager" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.641654 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda084f4-b624-4036-8da8-27d83af188ba" containerName="route-controller-manager" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.641694 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce723dd2-6ea2-49d1-9faf-c92026630754" containerName="controller-manager" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.641708 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="75cd9e37-60e9-42c2-9566-29af704fd01f" containerName="collect-profiles" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.642825 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9cc9dd689-gmjmh"] Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.642958 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.648564 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.648631 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.648917 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.649117 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.649247 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.650976 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.659521 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.818190 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f41d281-2f93-480f-a24e-ff33eae95c16-serving-cert\") pod \"controller-manager-9cc9dd689-gmjmh\" (UID: \"2f41d281-2f93-480f-a24e-ff33eae95c16\") " pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.818282 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/2f41d281-2f93-480f-a24e-ff33eae95c16-config\") pod \"controller-manager-9cc9dd689-gmjmh\" (UID: \"2f41d281-2f93-480f-a24e-ff33eae95c16\") " pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.818372 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f41d281-2f93-480f-a24e-ff33eae95c16-proxy-ca-bundles\") pod \"controller-manager-9cc9dd689-gmjmh\" (UID: \"2f41d281-2f93-480f-a24e-ff33eae95c16\") " pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.818439 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f41d281-2f93-480f-a24e-ff33eae95c16-client-ca\") pod \"controller-manager-9cc9dd689-gmjmh\" (UID: \"2f41d281-2f93-480f-a24e-ff33eae95c16\") " pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.818508 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45gtl\" (UniqueName: \"kubernetes.io/projected/2f41d281-2f93-480f-a24e-ff33eae95c16-kube-api-access-45gtl\") pod \"controller-manager-9cc9dd689-gmjmh\" (UID: \"2f41d281-2f93-480f-a24e-ff33eae95c16\") " pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.920065 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45gtl\" (UniqueName: \"kubernetes.io/projected/2f41d281-2f93-480f-a24e-ff33eae95c16-kube-api-access-45gtl\") pod \"controller-manager-9cc9dd689-gmjmh\" (UID: \"2f41d281-2f93-480f-a24e-ff33eae95c16\") " 
pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.920468 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f41d281-2f93-480f-a24e-ff33eae95c16-serving-cert\") pod \"controller-manager-9cc9dd689-gmjmh\" (UID: \"2f41d281-2f93-480f-a24e-ff33eae95c16\") " pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.920516 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f41d281-2f93-480f-a24e-ff33eae95c16-config\") pod \"controller-manager-9cc9dd689-gmjmh\" (UID: \"2f41d281-2f93-480f-a24e-ff33eae95c16\") " pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.920570 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f41d281-2f93-480f-a24e-ff33eae95c16-proxy-ca-bundles\") pod \"controller-manager-9cc9dd689-gmjmh\" (UID: \"2f41d281-2f93-480f-a24e-ff33eae95c16\") " pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.920621 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f41d281-2f93-480f-a24e-ff33eae95c16-client-ca\") pod \"controller-manager-9cc9dd689-gmjmh\" (UID: \"2f41d281-2f93-480f-a24e-ff33eae95c16\") " pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.921971 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2f41d281-2f93-480f-a24e-ff33eae95c16-client-ca\") pod 
\"controller-manager-9cc9dd689-gmjmh\" (UID: \"2f41d281-2f93-480f-a24e-ff33eae95c16\") " pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.923279 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2f41d281-2f93-480f-a24e-ff33eae95c16-config\") pod \"controller-manager-9cc9dd689-gmjmh\" (UID: \"2f41d281-2f93-480f-a24e-ff33eae95c16\") " pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.923497 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2f41d281-2f93-480f-a24e-ff33eae95c16-proxy-ca-bundles\") pod \"controller-manager-9cc9dd689-gmjmh\" (UID: \"2f41d281-2f93-480f-a24e-ff33eae95c16\") " pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.928999 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2f41d281-2f93-480f-a24e-ff33eae95c16-serving-cert\") pod \"controller-manager-9cc9dd689-gmjmh\" (UID: \"2f41d281-2f93-480f-a24e-ff33eae95c16\") " pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.951313 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45gtl\" (UniqueName: \"kubernetes.io/projected/2f41d281-2f93-480f-a24e-ff33eae95c16-kube-api-access-45gtl\") pod \"controller-manager-9cc9dd689-gmjmh\" (UID: \"2f41d281-2f93-480f-a24e-ff33eae95c16\") " pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:35 crc kubenswrapper[4746]: I1211 10:00:35.994974 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.104045 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn"] Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.104745 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn" Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.107604 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.107617 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.107685 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.107688 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.108350 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.108527 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.112099 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn"] Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.218556 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-9cc9dd689-gmjmh"] Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.223813 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886e0065-1ecb-4f05-8802-71fe955a3e0c-config\") pod \"route-controller-manager-6c595fc99c-kd9sn\" (UID: \"886e0065-1ecb-4f05-8802-71fe955a3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn" Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.223853 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/886e0065-1ecb-4f05-8802-71fe955a3e0c-client-ca\") pod \"route-controller-manager-6c595fc99c-kd9sn\" (UID: \"886e0065-1ecb-4f05-8802-71fe955a3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn" Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.223889 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/886e0065-1ecb-4f05-8802-71fe955a3e0c-serving-cert\") pod \"route-controller-manager-6c595fc99c-kd9sn\" (UID: \"886e0065-1ecb-4f05-8802-71fe955a3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn" Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.223926 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz5pq\" (UniqueName: \"kubernetes.io/projected/886e0065-1ecb-4f05-8802-71fe955a3e0c-kube-api-access-jz5pq\") pod \"route-controller-manager-6c595fc99c-kd9sn\" (UID: \"886e0065-1ecb-4f05-8802-71fe955a3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn" Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.271311 4746 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" event={"ID":"2f41d281-2f93-480f-a24e-ff33eae95c16","Type":"ContainerStarted","Data":"80081a1831060dafc6772751a2f10db3df724f0232c25698436c865f14766a6c"} Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.325527 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/886e0065-1ecb-4f05-8802-71fe955a3e0c-serving-cert\") pod \"route-controller-manager-6c595fc99c-kd9sn\" (UID: \"886e0065-1ecb-4f05-8802-71fe955a3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn" Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.325632 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz5pq\" (UniqueName: \"kubernetes.io/projected/886e0065-1ecb-4f05-8802-71fe955a3e0c-kube-api-access-jz5pq\") pod \"route-controller-manager-6c595fc99c-kd9sn\" (UID: \"886e0065-1ecb-4f05-8802-71fe955a3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn" Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.325687 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886e0065-1ecb-4f05-8802-71fe955a3e0c-config\") pod \"route-controller-manager-6c595fc99c-kd9sn\" (UID: \"886e0065-1ecb-4f05-8802-71fe955a3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn" Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.325716 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/886e0065-1ecb-4f05-8802-71fe955a3e0c-client-ca\") pod \"route-controller-manager-6c595fc99c-kd9sn\" (UID: \"886e0065-1ecb-4f05-8802-71fe955a3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn" Dec 11 10:00:36 crc 
kubenswrapper[4746]: I1211 10:00:36.326959 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/886e0065-1ecb-4f05-8802-71fe955a3e0c-client-ca\") pod \"route-controller-manager-6c595fc99c-kd9sn\" (UID: \"886e0065-1ecb-4f05-8802-71fe955a3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn"
Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.327346 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886e0065-1ecb-4f05-8802-71fe955a3e0c-config\") pod \"route-controller-manager-6c595fc99c-kd9sn\" (UID: \"886e0065-1ecb-4f05-8802-71fe955a3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn"
Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.330497 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/886e0065-1ecb-4f05-8802-71fe955a3e0c-serving-cert\") pod \"route-controller-manager-6c595fc99c-kd9sn\" (UID: \"886e0065-1ecb-4f05-8802-71fe955a3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn"
Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.346219 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz5pq\" (UniqueName: \"kubernetes.io/projected/886e0065-1ecb-4f05-8802-71fe955a3e0c-kube-api-access-jz5pq\") pod \"route-controller-manager-6c595fc99c-kd9sn\" (UID: \"886e0065-1ecb-4f05-8802-71fe955a3e0c\") " pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn"
Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.431965 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn"
Dec 11 10:00:36 crc kubenswrapper[4746]: I1211 10:00:36.879667 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn"]
Dec 11 10:00:36 crc kubenswrapper[4746]: W1211 10:00:36.888745 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod886e0065_1ecb_4f05_8802_71fe955a3e0c.slice/crio-1c1d0c4d11a0a593aa5bc5b261f5295b6c46686a31017f1d3c1941ad189fc070 WatchSource:0}: Error finding container 1c1d0c4d11a0a593aa5bc5b261f5295b6c46686a31017f1d3c1941ad189fc070: Status 404 returned error can't find the container with id 1c1d0c4d11a0a593aa5bc5b261f5295b6c46686a31017f1d3c1941ad189fc070
Dec 11 10:00:37 crc kubenswrapper[4746]: I1211 10:00:37.279391 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" event={"ID":"2f41d281-2f93-480f-a24e-ff33eae95c16","Type":"ContainerStarted","Data":"b9142a60996ca1f20707d1c7bb1222e866b61f4215aaa1f5b836ae397f69e463"}
Dec 11 10:00:37 crc kubenswrapper[4746]: I1211 10:00:37.279630 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh"
Dec 11 10:00:37 crc kubenswrapper[4746]: I1211 10:00:37.281060 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn" event={"ID":"886e0065-1ecb-4f05-8802-71fe955a3e0c","Type":"ContainerStarted","Data":"1c1d0c4d11a0a593aa5bc5b261f5295b6c46686a31017f1d3c1941ad189fc070"}
Dec 11 10:00:37 crc kubenswrapper[4746]: I1211 10:00:37.288651 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh"
Dec 11 10:00:37 crc kubenswrapper[4746]: I1211 10:00:37.296675 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9cc9dd689-gmjmh" podStartSLOduration=2.296650948 podStartE2EDuration="2.296650948s" podCreationTimestamp="2025-12-11 10:00:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:00:37.296029261 +0000 UTC m=+410.155892564" watchObservedRunningTime="2025-12-11 10:00:37.296650948 +0000 UTC m=+410.156514271"
Dec 11 10:00:38 crc kubenswrapper[4746]: I1211 10:00:38.289998 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn" event={"ID":"886e0065-1ecb-4f05-8802-71fe955a3e0c","Type":"ContainerStarted","Data":"d3e4db4779a9794c43fb80f8d465aa2e2062bcb9fa74051c295c63bc7ba60616"}
Dec 11 10:00:38 crc kubenswrapper[4746]: I1211 10:00:38.290533 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn"
Dec 11 10:00:38 crc kubenswrapper[4746]: I1211 10:00:38.299250 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn"
Dec 11 10:00:38 crc kubenswrapper[4746]: I1211 10:00:38.320506 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn" podStartSLOduration=4.320485761 podStartE2EDuration="4.320485761s" podCreationTimestamp="2025-12-11 10:00:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:00:38.317937972 +0000 UTC m=+411.177801315" watchObservedRunningTime="2025-12-11 10:00:38.320485761 +0000 UTC m=+411.180349064"
Dec 11 10:00:59 crc kubenswrapper[4746]: I1211 10:00:59.877472 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 10:00:59 crc kubenswrapper[4746]: I1211 10:00:59.878123 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 10:00:59 crc kubenswrapper[4746]: I1211 10:00:59.878182 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6"
Dec 11 10:00:59 crc kubenswrapper[4746]: I1211 10:00:59.878903 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2a42ad17d86d35ad64e581cff61d7570f6fe9c16ebe1b1b6377d0f2511611aed"} pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 10:00:59 crc kubenswrapper[4746]: I1211 10:00:59.878963 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" containerID="cri-o://2a42ad17d86d35ad64e581cff61d7570f6fe9c16ebe1b1b6377d0f2511611aed" gracePeriod=600
Dec 11 10:01:00 crc kubenswrapper[4746]: I1211 10:01:00.437948 4746 generic.go:334] "Generic (PLEG): container finished" podID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerID="2a42ad17d86d35ad64e581cff61d7570f6fe9c16ebe1b1b6377d0f2511611aed" exitCode=0
Dec 11 10:01:00 crc kubenswrapper[4746]: I1211 10:01:00.438068 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerDied","Data":"2a42ad17d86d35ad64e581cff61d7570f6fe9c16ebe1b1b6377d0f2511611aed"}
Dec 11 10:01:00 crc kubenswrapper[4746]: I1211 10:01:00.438464 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"83dcd2da292677c5018ad8d1c74c0fb581818e298e9cb9996f2d7ffb5e3102ac"}
Dec 11 10:01:00 crc kubenswrapper[4746]: I1211 10:01:00.438501 4746 scope.go:117] "RemoveContainer" containerID="09e1c4e58441d318207148b1a44f3bf3ccebb1d74dcf428d6fb588a3b46a2cc5"
Dec 11 10:01:33 crc kubenswrapper[4746]: I1211 10:01:33.866331 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn"]
Dec 11 10:01:33 crc kubenswrapper[4746]: I1211 10:01:33.867095 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn" podUID="886e0065-1ecb-4f05-8802-71fe955a3e0c" containerName="route-controller-manager" containerID="cri-o://d3e4db4779a9794c43fb80f8d465aa2e2062bcb9fa74051c295c63bc7ba60616" gracePeriod=30
Dec 11 10:01:34 crc kubenswrapper[4746]: I1211 10:01:34.653136 4746 generic.go:334] "Generic (PLEG): container finished" podID="886e0065-1ecb-4f05-8802-71fe955a3e0c" containerID="d3e4db4779a9794c43fb80f8d465aa2e2062bcb9fa74051c295c63bc7ba60616" exitCode=0
Dec 11 10:01:34 crc kubenswrapper[4746]: I1211 10:01:34.653188 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn" event={"ID":"886e0065-1ecb-4f05-8802-71fe955a3e0c","Type":"ContainerDied","Data":"d3e4db4779a9794c43fb80f8d465aa2e2062bcb9fa74051c295c63bc7ba60616"}
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.542129 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.569129 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"]
Dec 11 10:01:35 crc kubenswrapper[4746]: E1211 10:01:35.569532 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886e0065-1ecb-4f05-8802-71fe955a3e0c" containerName="route-controller-manager"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.569648 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="886e0065-1ecb-4f05-8802-71fe955a3e0c" containerName="route-controller-manager"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.569873 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="886e0065-1ecb-4f05-8802-71fe955a3e0c" containerName="route-controller-manager"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.570472 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.581647 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"]
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.651009 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886e0065-1ecb-4f05-8802-71fe955a3e0c-config\") pod \"886e0065-1ecb-4f05-8802-71fe955a3e0c\" (UID: \"886e0065-1ecb-4f05-8802-71fe955a3e0c\") "
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.651592 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz5pq\" (UniqueName: \"kubernetes.io/projected/886e0065-1ecb-4f05-8802-71fe955a3e0c-kube-api-access-jz5pq\") pod \"886e0065-1ecb-4f05-8802-71fe955a3e0c\" (UID: \"886e0065-1ecb-4f05-8802-71fe955a3e0c\") "
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.651689 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/886e0065-1ecb-4f05-8802-71fe955a3e0c-serving-cert\") pod \"886e0065-1ecb-4f05-8802-71fe955a3e0c\" (UID: \"886e0065-1ecb-4f05-8802-71fe955a3e0c\") "
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.651801 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/886e0065-1ecb-4f05-8802-71fe955a3e0c-client-ca\") pod \"886e0065-1ecb-4f05-8802-71fe955a3e0c\" (UID: \"886e0065-1ecb-4f05-8802-71fe955a3e0c\") "
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.652004 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9c654ae-03c3-498e-a0cc-3e7758dfb4e6-client-ca\") pod \"route-controller-manager-6697976d76-rxcjs\" (UID: \"c9c654ae-03c3-498e-a0cc-3e7758dfb4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.652088 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886e0065-1ecb-4f05-8802-71fe955a3e0c-config" (OuterVolumeSpecName: "config") pod "886e0065-1ecb-4f05-8802-71fe955a3e0c" (UID: "886e0065-1ecb-4f05-8802-71fe955a3e0c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.652168 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9c654ae-03c3-498e-a0cc-3e7758dfb4e6-config\") pod \"route-controller-manager-6697976d76-rxcjs\" (UID: \"c9c654ae-03c3-498e-a0cc-3e7758dfb4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.652278 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9c654ae-03c3-498e-a0cc-3e7758dfb4e6-serving-cert\") pod \"route-controller-manager-6697976d76-rxcjs\" (UID: \"c9c654ae-03c3-498e-a0cc-3e7758dfb4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.652403 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2g4b\" (UniqueName: \"kubernetes.io/projected/c9c654ae-03c3-498e-a0cc-3e7758dfb4e6-kube-api-access-j2g4b\") pod \"route-controller-manager-6697976d76-rxcjs\" (UID: \"c9c654ae-03c3-498e-a0cc-3e7758dfb4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.652705 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886e0065-1ecb-4f05-8802-71fe955a3e0c-config\") on node \"crc\" DevicePath \"\""
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.654391 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886e0065-1ecb-4f05-8802-71fe955a3e0c-client-ca" (OuterVolumeSpecName: "client-ca") pod "886e0065-1ecb-4f05-8802-71fe955a3e0c" (UID: "886e0065-1ecb-4f05-8802-71fe955a3e0c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.659294 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886e0065-1ecb-4f05-8802-71fe955a3e0c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "886e0065-1ecb-4f05-8802-71fe955a3e0c" (UID: "886e0065-1ecb-4f05-8802-71fe955a3e0c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.662871 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn" event={"ID":"886e0065-1ecb-4f05-8802-71fe955a3e0c","Type":"ContainerDied","Data":"1c1d0c4d11a0a593aa5bc5b261f5295b6c46686a31017f1d3c1941ad189fc070"}
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.662932 4746 scope.go:117] "RemoveContainer" containerID="d3e4db4779a9794c43fb80f8d465aa2e2062bcb9fa74051c295c63bc7ba60616"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.663092 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.669689 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886e0065-1ecb-4f05-8802-71fe955a3e0c-kube-api-access-jz5pq" (OuterVolumeSpecName: "kube-api-access-jz5pq") pod "886e0065-1ecb-4f05-8802-71fe955a3e0c" (UID: "886e0065-1ecb-4f05-8802-71fe955a3e0c"). InnerVolumeSpecName "kube-api-access-jz5pq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.754097 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9c654ae-03c3-498e-a0cc-3e7758dfb4e6-serving-cert\") pod \"route-controller-manager-6697976d76-rxcjs\" (UID: \"c9c654ae-03c3-498e-a0cc-3e7758dfb4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.754143 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2g4b\" (UniqueName: \"kubernetes.io/projected/c9c654ae-03c3-498e-a0cc-3e7758dfb4e6-kube-api-access-j2g4b\") pod \"route-controller-manager-6697976d76-rxcjs\" (UID: \"c9c654ae-03c3-498e-a0cc-3e7758dfb4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.754203 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9c654ae-03c3-498e-a0cc-3e7758dfb4e6-client-ca\") pod \"route-controller-manager-6697976d76-rxcjs\" (UID: \"c9c654ae-03c3-498e-a0cc-3e7758dfb4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.754255 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9c654ae-03c3-498e-a0cc-3e7758dfb4e6-config\") pod \"route-controller-manager-6697976d76-rxcjs\" (UID: \"c9c654ae-03c3-498e-a0cc-3e7758dfb4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.754296 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz5pq\" (UniqueName: \"kubernetes.io/projected/886e0065-1ecb-4f05-8802-71fe955a3e0c-kube-api-access-jz5pq\") on node \"crc\" DevicePath \"\""
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.754307 4746 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/886e0065-1ecb-4f05-8802-71fe955a3e0c-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.754316 4746 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/886e0065-1ecb-4f05-8802-71fe955a3e0c-client-ca\") on node \"crc\" DevicePath \"\""
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.755494 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9c654ae-03c3-498e-a0cc-3e7758dfb4e6-config\") pod \"route-controller-manager-6697976d76-rxcjs\" (UID: \"c9c654ae-03c3-498e-a0cc-3e7758dfb4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.757407 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9c654ae-03c3-498e-a0cc-3e7758dfb4e6-client-ca\") pod \"route-controller-manager-6697976d76-rxcjs\" (UID: \"c9c654ae-03c3-498e-a0cc-3e7758dfb4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.758779 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9c654ae-03c3-498e-a0cc-3e7758dfb4e6-serving-cert\") pod \"route-controller-manager-6697976d76-rxcjs\" (UID: \"c9c654ae-03c3-498e-a0cc-3e7758dfb4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.773682 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2g4b\" (UniqueName: \"kubernetes.io/projected/c9c654ae-03c3-498e-a0cc-3e7758dfb4e6-kube-api-access-j2g4b\") pod \"route-controller-manager-6697976d76-rxcjs\" (UID: \"c9c654ae-03c3-498e-a0cc-3e7758dfb4e6\") " pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"
Dec 11 10:01:35 crc kubenswrapper[4746]: I1211 10:01:35.892829 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"
Dec 11 10:01:36 crc kubenswrapper[4746]: I1211 10:01:36.026876 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn"]
Dec 11 10:01:36 crc kubenswrapper[4746]: I1211 10:01:36.030192 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c595fc99c-kd9sn"]
Dec 11 10:01:36 crc kubenswrapper[4746]: I1211 10:01:36.141851 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"]
Dec 11 10:01:36 crc kubenswrapper[4746]: I1211 10:01:36.670035 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs" event={"ID":"c9c654ae-03c3-498e-a0cc-3e7758dfb4e6","Type":"ContainerStarted","Data":"3ee2766e792561f8540d5e135fb24f2ea157393fed7c8a002474afa6ace00dda"}
Dec 11 10:01:37 crc kubenswrapper[4746]: I1211 10:01:37.636498 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="886e0065-1ecb-4f05-8802-71fe955a3e0c" path="/var/lib/kubelet/pods/886e0065-1ecb-4f05-8802-71fe955a3e0c/volumes"
Dec 11 10:01:37 crc kubenswrapper[4746]: I1211 10:01:37.676026 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs" event={"ID":"c9c654ae-03c3-498e-a0cc-3e7758dfb4e6","Type":"ContainerStarted","Data":"2d1e88cf7763fbc46fa6ed0abeb21c85f4ae6346572d1131b9ab80aac7b74b34"}
Dec 11 10:01:37 crc kubenswrapper[4746]: I1211 10:01:37.676276 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"
Dec 11 10:01:37 crc kubenswrapper[4746]: I1211 10:01:37.681725 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs"
Dec 11 10:01:37 crc kubenswrapper[4746]: I1211 10:01:37.698418 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6697976d76-rxcjs" podStartSLOduration=4.698399328 podStartE2EDuration="4.698399328s" podCreationTimestamp="2025-12-11 10:01:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:01:37.692771776 +0000 UTC m=+470.552635109" watchObservedRunningTime="2025-12-11 10:01:37.698399328 +0000 UTC m=+470.558262641"
Dec 11 10:03:29 crc kubenswrapper[4746]: I1211 10:03:29.878288 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 10:03:29 crc kubenswrapper[4746]: I1211 10:03:29.878798 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.850704 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-j8m6k"]
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.852094 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-j8m6k"
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.854390 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.854399 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.854556 4746 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-4mzmg"
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.858028 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-thbcd"]
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.858706 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-thbcd"
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.860249 4746 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-845j2"
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.862327 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-j8m6k"]
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.876067 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-hww2m"]
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.876980 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-hww2m"
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.879837 4746 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-wrp9t"
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.892953 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-thbcd"]
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.905378 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-hww2m"]
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.986628 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkvwb\" (UniqueName: \"kubernetes.io/projected/3eb397c1-a790-47d8-9b3f-93030517ef10-kube-api-access-zkvwb\") pod \"cert-manager-webhook-5655c58dd6-hww2m\" (UID: \"3eb397c1-a790-47d8-9b3f-93030517ef10\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-hww2m"
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.986700 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkhpv\" (UniqueName: \"kubernetes.io/projected/421d5070-53ff-451a-bc95-3b8e966afd09-kube-api-access-wkhpv\") pod \"cert-manager-cainjector-7f985d654d-j8m6k\" (UID: \"421d5070-53ff-451a-bc95-3b8e966afd09\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-j8m6k"
Dec 11 10:03:54 crc kubenswrapper[4746]: I1211 10:03:54.986726 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8l84\" (UniqueName: \"kubernetes.io/projected/c5782e13-fb8a-4d0c-b0b2-9649898453d7-kube-api-access-c8l84\") pod \"cert-manager-5b446d88c5-thbcd\" (UID: \"c5782e13-fb8a-4d0c-b0b2-9649898453d7\") " pod="cert-manager/cert-manager-5b446d88c5-thbcd"
Dec 11 10:03:55 crc kubenswrapper[4746]: I1211 10:03:55.088272 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkhpv\" (UniqueName: \"kubernetes.io/projected/421d5070-53ff-451a-bc95-3b8e966afd09-kube-api-access-wkhpv\") pod \"cert-manager-cainjector-7f985d654d-j8m6k\" (UID: \"421d5070-53ff-451a-bc95-3b8e966afd09\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-j8m6k"
Dec 11 10:03:55 crc kubenswrapper[4746]: I1211 10:03:55.088328 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8l84\" (UniqueName: \"kubernetes.io/projected/c5782e13-fb8a-4d0c-b0b2-9649898453d7-kube-api-access-c8l84\") pod \"cert-manager-5b446d88c5-thbcd\" (UID: \"c5782e13-fb8a-4d0c-b0b2-9649898453d7\") " pod="cert-manager/cert-manager-5b446d88c5-thbcd"
Dec 11 10:03:55 crc kubenswrapper[4746]: I1211 10:03:55.088393 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkvwb\" (UniqueName: \"kubernetes.io/projected/3eb397c1-a790-47d8-9b3f-93030517ef10-kube-api-access-zkvwb\") pod \"cert-manager-webhook-5655c58dd6-hww2m\" (UID: \"3eb397c1-a790-47d8-9b3f-93030517ef10\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-hww2m"
Dec 11 10:03:55 crc kubenswrapper[4746]: I1211 10:03:55.105277 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8l84\" (UniqueName: \"kubernetes.io/projected/c5782e13-fb8a-4d0c-b0b2-9649898453d7-kube-api-access-c8l84\") pod \"cert-manager-5b446d88c5-thbcd\" (UID: \"c5782e13-fb8a-4d0c-b0b2-9649898453d7\") " pod="cert-manager/cert-manager-5b446d88c5-thbcd"
Dec 11 10:03:55 crc kubenswrapper[4746]: I1211 10:03:55.105564 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkhpv\" (UniqueName: \"kubernetes.io/projected/421d5070-53ff-451a-bc95-3b8e966afd09-kube-api-access-wkhpv\") pod \"cert-manager-cainjector-7f985d654d-j8m6k\" (UID: \"421d5070-53ff-451a-bc95-3b8e966afd09\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-j8m6k"
Dec 11 10:03:55 crc kubenswrapper[4746]: I1211 10:03:55.108453 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkvwb\" (UniqueName: \"kubernetes.io/projected/3eb397c1-a790-47d8-9b3f-93030517ef10-kube-api-access-zkvwb\") pod \"cert-manager-webhook-5655c58dd6-hww2m\" (UID: \"3eb397c1-a790-47d8-9b3f-93030517ef10\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-hww2m"
Dec 11 10:03:55 crc kubenswrapper[4746]: I1211 10:03:55.170696 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-j8m6k"
Dec 11 10:03:55 crc kubenswrapper[4746]: I1211 10:03:55.181164 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-thbcd"
Dec 11 10:03:55 crc kubenswrapper[4746]: I1211 10:03:55.194488 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-hww2m"
Dec 11 10:03:55 crc kubenswrapper[4746]: I1211 10:03:55.409806 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-thbcd"]
Dec 11 10:03:55 crc kubenswrapper[4746]: I1211 10:03:55.421873 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 11 10:03:55 crc kubenswrapper[4746]: I1211 10:03:55.502032 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-thbcd" event={"ID":"c5782e13-fb8a-4d0c-b0b2-9649898453d7","Type":"ContainerStarted","Data":"0c5762e7c272c4c6e409e34370d314c291a9798cd22973ef3629011f375eb64f"}
Dec 11 10:03:55 crc kubenswrapper[4746]: I1211 10:03:55.654441 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-j8m6k"]
Dec 11 10:03:55 crc kubenswrapper[4746]: W1211 10:03:55.656713 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod421d5070_53ff_451a_bc95_3b8e966afd09.slice/crio-aa5775248fabb506cb8213a86160af20cb75eb7d571a75c4565883697f585fe4 WatchSource:0}: Error finding container aa5775248fabb506cb8213a86160af20cb75eb7d571a75c4565883697f585fe4: Status 404 returned error can't find the container with id aa5775248fabb506cb8213a86160af20cb75eb7d571a75c4565883697f585fe4
Dec 11 10:03:55 crc kubenswrapper[4746]: I1211 10:03:55.658975 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-hww2m"]
Dec 11 10:03:56 crc kubenswrapper[4746]: I1211 10:03:56.509023 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-hww2m" event={"ID":"3eb397c1-a790-47d8-9b3f-93030517ef10","Type":"ContainerStarted","Data":"7d05004c983aaa34e9386d568a5a745e8c5bc3a907bdf9f9da70372fb5d9b712"}
Dec 11 10:03:56 crc kubenswrapper[4746]: I1211 10:03:56.511608 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-j8m6k" event={"ID":"421d5070-53ff-451a-bc95-3b8e966afd09","Type":"ContainerStarted","Data":"aa5775248fabb506cb8213a86160af20cb75eb7d571a75c4565883697f585fe4"}
Dec 11 10:03:59 crc kubenswrapper[4746]: I1211 10:03:59.877663 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 10:03:59 crc kubenswrapper[4746]: I1211 10:03:59.878015 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 10:04:00 crc kubenswrapper[4746]: I1211 10:04:00.535998 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-j8m6k" event={"ID":"421d5070-53ff-451a-bc95-3b8e966afd09","Type":"ContainerStarted","Data":"073d65334c78dab5f76fe02fb1c3302ac48e2097bf928b1817dc8a145ceb009a"}
Dec 11 10:04:00 crc kubenswrapper[4746]: I1211 10:04:00.544757 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-thbcd" event={"ID":"c5782e13-fb8a-4d0c-b0b2-9649898453d7","Type":"ContainerStarted","Data":"39d95925a13aabab38d9eb07b4b88a3c571c909870733c8816c42063b26547c3"}
Dec 11 10:04:00 crc kubenswrapper[4746]: I1211 10:04:00.561550 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-j8m6k" podStartSLOduration=1.9774318050000002 podStartE2EDuration="6.56150628s" podCreationTimestamp="2025-12-11 10:03:54 +0000 UTC" firstStartedPulling="2025-12-11 10:03:55.659880566 +0000 UTC m=+608.519743879" lastFinishedPulling="2025-12-11 10:04:00.243955021 +0000 UTC m=+613.103818354" observedRunningTime="2025-12-11 10:04:00.556181476 +0000 UTC m=+613.416044809" watchObservedRunningTime="2025-12-11 10:04:00.56150628 +0000 UTC m=+613.421369773"
Dec 11 10:04:00 crc kubenswrapper[4746]: I1211 10:04:00.579299 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-thbcd" podStartSLOduration=1.7580551359999999 podStartE2EDuration="6.57927403s" podCreationTimestamp="2025-12-11 10:03:54 +0000 UTC" firstStartedPulling="2025-12-11 10:03:55.421666369 +0000 UTC m=+608.281529682" lastFinishedPulling="2025-12-11 10:04:00.242885273 +0000 UTC m=+613.102748576" observedRunningTime="2025-12-11 10:04:00.577300576 +0000 UTC m=+613.437163929" watchObservedRunningTime="2025-12-11 10:04:00.57927403 +0000 UTC m=+613.439137363"
Dec 11 10:04:01 crc kubenswrapper[4746]: I1211 10:04:01.550145 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-hww2m" event={"ID":"3eb397c1-a790-47d8-9b3f-93030517ef10","Type":"ContainerStarted","Data":"3d66b9331c7537ac53e5c9388cfd348850d9c4946169c3daa9ec34973f0713a2"}
Dec 11 10:04:01 crc kubenswrapper[4746]: I1211 10:04:01.571371 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-hww2m" podStartSLOduration=2.9894104009999998 podStartE2EDuration="7.571339849s" podCreationTimestamp="2025-12-11 10:03:54 +0000 UTC" firstStartedPulling="2025-12-11 10:03:55.666955327 +0000 UTC m=+608.526818640" lastFinishedPulling="2025-12-11 10:04:00.248884775 +0000 UTC m=+613.108748088" observedRunningTime="2025-12-11 10:04:01.563305683 +0000 UTC m=+614.423169016" watchObservedRunningTime="2025-12-11 10:04:01.571339849 +0000 UTC m=+614.431203162"
Dec 11 10:04:02 crc kubenswrapper[4746]: I1211 10:04:02.556742 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-hww2m"
Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.076515 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w2s5z"]
Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.077287 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovn-controller" containerID="cri-o://8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda" gracePeriod=30
Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.077329 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="nbdb" containerID="cri-o://a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045" gracePeriod=30
Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.077452 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="northd" containerID="cri-o://20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503" gracePeriod=30
Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.077503 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2" gracePeriod=30
Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.077556 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podUID="014636cb-e768-4554-9556-460db2ebfdcb"
containerName="kube-rbac-proxy-node" containerID="cri-o://f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578" gracePeriod=30 Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.077613 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovn-acl-logging" containerID="cri-o://22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2" gracePeriod=30 Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.077684 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="sbdb" containerID="cri-o://0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4" gracePeriod=30 Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.153447 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovnkube-controller" containerID="cri-o://bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5" gracePeriod=30 Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.176896 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod014636cb_e768_4554_9556_460db2ebfdcb.slice/crio-f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod014636cb_e768_4554_9556_460db2ebfdcb.slice/crio-conmon-8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod014636cb_e768_4554_9556_460db2ebfdcb.slice/crio-conmon-f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.200126 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-hww2m" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.427325 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovnkube-controller/3.log" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.429493 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovn-acl-logging/0.log" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.429996 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovn-controller/0.log" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.430997 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.487379 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-8jb9d"] Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.489254 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovnkube-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489292 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovnkube-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.489304 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="kubecfg-setup" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489311 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="kubecfg-setup" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.489321 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="sbdb" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489326 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="sbdb" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.489336 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489342 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.489353 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovnkube-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489358 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovnkube-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.489368 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="kube-rbac-proxy-node" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489374 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="kube-rbac-proxy-node" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.489383 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="nbdb" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489390 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="nbdb" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.489398 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovn-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489403 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovn-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.489412 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovn-acl-logging" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489419 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovn-acl-logging" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.489429 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="northd" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489434 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="northd" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.489445 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovnkube-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489451 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovnkube-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.489457 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovnkube-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489462 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovnkube-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489604 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="kube-rbac-proxy-ovn-metrics" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489615 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="nbdb" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489621 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="northd" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489628 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovn-acl-logging" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489635 4746 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovn-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489643 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovnkube-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489651 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovnkube-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489672 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovnkube-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489680 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovnkube-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489690 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="sbdb" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489697 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovnkube-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489706 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="kube-rbac-proxy-node" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.489791 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovnkube-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.489798 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="014636cb-e768-4554-9556-460db2ebfdcb" containerName="ovnkube-controller" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.491469 4746 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.520528 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-slash\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.520600 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-ovnkube-script-lib\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.520653 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-env-overrides\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.520747 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-run-netns\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.520809 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-node-log\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.520837 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-systemd-units\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.520889 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-cni-netd\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.520915 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-ovnkube-config\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.520935 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/014636cb-e768-4554-9556-460db2ebfdcb-ovn-node-metrics-cert\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.520982 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-log-socket\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521011 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-systemd\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc 
kubenswrapper[4746]: I1211 10:04:05.521063 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-openvswitch\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521092 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-var-lib-cni-networks-ovn-kubernetes\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521157 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-etc-openvswitch\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521222 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-kubelet\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521252 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49h5j\" (UniqueName: \"kubernetes.io/projected/014636cb-e768-4554-9556-460db2ebfdcb-kube-api-access-49h5j\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521325 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-var-lib-openvswitch\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521351 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-run-ovn-kubernetes\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521403 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-ovn\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521430 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-cni-bin\") pod \"014636cb-e768-4554-9556-460db2ebfdcb\" (UID: \"014636cb-e768-4554-9556-460db2ebfdcb\") " Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.520710 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-slash" (OuterVolumeSpecName: "host-slash") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521211 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521363 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521439 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521480 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521509 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521945 4746 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521969 4746 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521981 4746 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521991 4746 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-slash\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521571 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521587 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521591 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-node-log" (OuterVolumeSpecName: "node-log") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521605 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521614 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521613 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521630 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-log-socket" (OuterVolumeSpecName: "log-socket") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521635 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521733 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.521558 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.522635 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.527646 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/014636cb-e768-4554-9556-460db2ebfdcb-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.527769 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/014636cb-e768-4554-9556-460db2ebfdcb-kube-api-access-49h5j" (OuterVolumeSpecName: "kube-api-access-49h5j") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "kube-api-access-49h5j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.537607 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "014636cb-e768-4554-9556-460db2ebfdcb" (UID: "014636cb-e768-4554-9556-460db2ebfdcb"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.586197 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovnkube-controller/3.log" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.588232 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovn-acl-logging/0.log" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.588671 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-w2s5z_014636cb-e768-4554-9556-460db2ebfdcb/ovn-controller/0.log" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.589332 4746 generic.go:334] "Generic (PLEG): container finished" podID="014636cb-e768-4554-9556-460db2ebfdcb" containerID="bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5" exitCode=0 Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.589363 4746 generic.go:334] "Generic (PLEG): container finished" podID="014636cb-e768-4554-9556-460db2ebfdcb" containerID="0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4" exitCode=0 Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.589397 4746 generic.go:334] "Generic (PLEG): container finished" podID="014636cb-e768-4554-9556-460db2ebfdcb" containerID="a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045" exitCode=0 Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 
10:04:05.589409 4746 generic.go:334] "Generic (PLEG): container finished" podID="014636cb-e768-4554-9556-460db2ebfdcb" containerID="20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503" exitCode=0 Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.589417 4746 generic.go:334] "Generic (PLEG): container finished" podID="014636cb-e768-4554-9556-460db2ebfdcb" containerID="2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2" exitCode=0 Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.589426 4746 generic.go:334] "Generic (PLEG): container finished" podID="014636cb-e768-4554-9556-460db2ebfdcb" containerID="f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578" exitCode=0 Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.589436 4746 generic.go:334] "Generic (PLEG): container finished" podID="014636cb-e768-4554-9556-460db2ebfdcb" containerID="22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2" exitCode=143 Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.589445 4746 generic.go:334] "Generic (PLEG): container finished" podID="014636cb-e768-4554-9556-460db2ebfdcb" containerID="8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda" exitCode=143 Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.589435 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerDied","Data":"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.589462 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.589526 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerDied","Data":"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.589615 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerDied","Data":"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.589722 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerDied","Data":"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.589644 4746 scope.go:117] "RemoveContainer" containerID="bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.589946 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerDied","Data":"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590107 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerDied","Data":"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590148 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590229 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590247 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590307 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590364 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590380 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590395 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590464 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590479 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590552 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerDied","Data":"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590583 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590688 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590708 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590723 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590782 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590803 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590818 4746 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590882 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590903 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590918 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.590990 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerDied","Data":"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591021 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591153 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591174 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4"} Dec 11 
10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591188 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591203 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591217 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591231 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591246 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591260 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591274 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591296 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2s5z" 
event={"ID":"014636cb-e768-4554-9556-460db2ebfdcb","Type":"ContainerDied","Data":"4ba82b00cbeb1aa1aa89f2f235878a2cd84f3232a6850548f1fbaab331adc5c0"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591321 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591339 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591354 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591368 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591385 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591399 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591413 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591419 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-r622c_52ba00d9-b0ef-4496-a6b8-e170f405c592/kube-multus/2.log" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591427 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591442 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591455 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591794 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r622c_52ba00d9-b0ef-4496-a6b8-e170f405c592/kube-multus/1.log" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591823 4746 generic.go:334] "Generic (PLEG): container finished" podID="52ba00d9-b0ef-4496-a6b8-e170f405c592" containerID="d352579c684a02b9fd849e08b256a881f7ee136d38731825f95f347ab33b36a1" exitCode=2 Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591847 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r622c" event={"ID":"52ba00d9-b0ef-4496-a6b8-e170f405c592","Type":"ContainerDied","Data":"d352579c684a02b9fd849e08b256a881f7ee136d38731825f95f347ab33b36a1"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.591872 4746 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2e317f8ae14d2ec2bb140d6293e5de8f1b9f1403d8ec9e68da06711aa5c8e467"} Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.592212 4746 scope.go:117] "RemoveContainer" 
containerID="d352579c684a02b9fd849e08b256a881f7ee136d38731825f95f347ab33b36a1" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.592521 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-r622c_openshift-multus(52ba00d9-b0ef-4496-a6b8-e170f405c592)\"" pod="openshift-multus/multus-r622c" podUID="52ba00d9-b0ef-4496-a6b8-e170f405c592" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.699380 4746 scope.go:117] "RemoveContainer" containerID="3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.699784 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/209d9ad0-1bae-4485-8f78-085b167a23e4-ovnkube-config\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.699834 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-etc-openvswitch\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.699862 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-run-systemd\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.699886 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.699923 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-run-ovn\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.700022 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-cni-netd\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.700044 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-run-ovn-kubernetes\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.700161 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/209d9ad0-1bae-4485-8f78-085b167a23e4-ovnkube-script-lib\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 
crc kubenswrapper[4746]: I1211 10:04:05.700193 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-cni-bin\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.700217 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-run-netns\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.700242 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-run-openvswitch\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.700335 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-systemd-units\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.700686 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-log-socket\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 
10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.701183 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxs6z\" (UniqueName: \"kubernetes.io/projected/209d9ad0-1bae-4485-8f78-085b167a23e4-kube-api-access-bxs6z\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.701293 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/209d9ad0-1bae-4485-8f78-085b167a23e4-ovn-node-metrics-cert\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.701370 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-kubelet\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.701438 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/209d9ad0-1bae-4485-8f78-085b167a23e4-env-overrides\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.701472 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-node-log\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.701501 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-slash\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.701677 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-var-lib-openvswitch\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.701784 4746 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-node-log\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.701797 4746 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.701810 4746 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.701821 4746 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.701830 4746 
reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/014636cb-e768-4554-9556-460db2ebfdcb-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.701984 4746 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-log-socket\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.701998 4746 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.702008 4746 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.702017 4746 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.702027 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49h5j\" (UniqueName: \"kubernetes.io/projected/014636cb-e768-4554-9556-460db2ebfdcb-kube-api-access-49h5j\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.702040 4746 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.702066 4746 reconciler_common.go:293] "Volume detached for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.702075 4746 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.702084 4746 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.702096 4746 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/014636cb-e768-4554-9556-460db2ebfdcb-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.702106 4746 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/014636cb-e768-4554-9556-460db2ebfdcb-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.725281 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w2s5z"] Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.730568 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w2s5z"] Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.731144 4746 scope.go:117] "RemoveContainer" containerID="0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.751217 4746 scope.go:117] "RemoveContainer" containerID="a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.767045 4746 scope.go:117] 
"RemoveContainer" containerID="20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.782236 4746 scope.go:117] "RemoveContainer" containerID="2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.793301 4746 scope.go:117] "RemoveContainer" containerID="f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.802851 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-run-openvswitch\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.802886 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-systemd-units\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.802908 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-log-socket\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.802934 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxs6z\" (UniqueName: \"kubernetes.io/projected/209d9ad0-1bae-4485-8f78-085b167a23e4-kube-api-access-bxs6z\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" 
Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.802959 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/209d9ad0-1bae-4485-8f78-085b167a23e4-ovn-node-metrics-cert\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.802976 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-run-openvswitch\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.802992 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-kubelet\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.803272 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-systemd-units\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.803269 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-kubelet\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.803317 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-log-socket\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.803347 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/209d9ad0-1bae-4485-8f78-085b167a23e4-env-overrides\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.803442 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-node-log\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.803507 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-slash\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.803545 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-var-lib-openvswitch\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.803593 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/209d9ad0-1bae-4485-8f78-085b167a23e4-ovnkube-config\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.803616 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-slash\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.803620 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-node-log\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.803640 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-var-lib-openvswitch\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.803935 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/209d9ad0-1bae-4485-8f78-085b167a23e4-env-overrides\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804253 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/209d9ad0-1bae-4485-8f78-085b167a23e4-ovnkube-config\") pod \"ovnkube-node-8jb9d\" (UID: 
\"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804322 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-etc-openvswitch\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804403 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-run-systemd\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804419 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-etc-openvswitch\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804427 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804465 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-run-systemd\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804495 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804538 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-run-ovn\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804565 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-cni-netd\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804610 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-run-ovn\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804615 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-run-ovn-kubernetes\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 
11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804637 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-cni-netd\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804677 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-run-ovn-kubernetes\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804696 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/209d9ad0-1bae-4485-8f78-085b167a23e4-ovnkube-script-lib\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804742 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-cni-bin\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804820 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-run-netns\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.804869 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-cni-bin\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.805713 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/209d9ad0-1bae-4485-8f78-085b167a23e4-host-run-netns\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.806272 4746 scope.go:117] "RemoveContainer" containerID="22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.806888 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/209d9ad0-1bae-4485-8f78-085b167a23e4-ovn-node-metrics-cert\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.807816 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/209d9ad0-1bae-4485-8f78-085b167a23e4-ovnkube-script-lib\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.819457 4746 scope.go:117] "RemoveContainer" containerID="8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.821155 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxs6z\" (UniqueName: 
\"kubernetes.io/projected/209d9ad0-1bae-4485-8f78-085b167a23e4-kube-api-access-bxs6z\") pod \"ovnkube-node-8jb9d\" (UID: \"209d9ad0-1bae-4485-8f78-085b167a23e4\") " pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.835196 4746 scope.go:117] "RemoveContainer" containerID="ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.849450 4746 scope.go:117] "RemoveContainer" containerID="bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.849902 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5\": container with ID starting with bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5 not found: ID does not exist" containerID="bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.849976 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5"} err="failed to get container status \"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5\": rpc error: code = NotFound desc = could not find container \"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5\": container with ID starting with bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.850007 4746 scope.go:117] "RemoveContainer" containerID="3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.850306 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab\": container with ID starting with 3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab not found: ID does not exist" containerID="3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.850332 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab"} err="failed to get container status \"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab\": rpc error: code = NotFound desc = could not find container \"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab\": container with ID starting with 3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.850346 4746 scope.go:117] "RemoveContainer" containerID="0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.850677 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\": container with ID starting with 0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4 not found: ID does not exist" containerID="0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.850729 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4"} err="failed to get container status \"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\": rpc error: code = NotFound desc = could not find container \"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\": container with ID 
starting with 0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.850764 4746 scope.go:117] "RemoveContainer" containerID="a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.851135 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\": container with ID starting with a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045 not found: ID does not exist" containerID="a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.851158 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045"} err="failed to get container status \"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\": rpc error: code = NotFound desc = could not find container \"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\": container with ID starting with a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.851172 4746 scope.go:117] "RemoveContainer" containerID="20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.851383 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\": container with ID starting with 20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503 not found: ID does not exist" containerID="20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503" Dec 11 
10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.851406 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503"} err="failed to get container status \"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\": rpc error: code = NotFound desc = could not find container \"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\": container with ID starting with 20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.851417 4746 scope.go:117] "RemoveContainer" containerID="2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.851632 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\": container with ID starting with 2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2 not found: ID does not exist" containerID="2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.851653 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2"} err="failed to get container status \"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\": rpc error: code = NotFound desc = could not find container \"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\": container with ID starting with 2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.851664 4746 scope.go:117] "RemoveContainer" 
containerID="f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.851877 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\": container with ID starting with f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578 not found: ID does not exist" containerID="f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.851896 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578"} err="failed to get container status \"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\": rpc error: code = NotFound desc = could not find container \"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\": container with ID starting with f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.851907 4746 scope.go:117] "RemoveContainer" containerID="22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.852138 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\": container with ID starting with 22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2 not found: ID does not exist" containerID="22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.852156 4746 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2"} err="failed to get container status \"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\": rpc error: code = NotFound desc = could not find container \"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\": container with ID starting with 22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.852174 4746 scope.go:117] "RemoveContainer" containerID="8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.852352 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\": container with ID starting with 8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda not found: ID does not exist" containerID="8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.852378 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda"} err="failed to get container status \"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\": rpc error: code = NotFound desc = could not find container \"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\": container with ID starting with 8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.852396 4746 scope.go:117] "RemoveContainer" containerID="ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0" Dec 11 10:04:05 crc kubenswrapper[4746]: E1211 10:04:05.852587 4746 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\": container with ID starting with ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0 not found: ID does not exist" containerID="ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.852611 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0"} err="failed to get container status \"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\": rpc error: code = NotFound desc = could not find container \"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\": container with ID starting with ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.852623 4746 scope.go:117] "RemoveContainer" containerID="bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.852852 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5"} err="failed to get container status \"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5\": rpc error: code = NotFound desc = could not find container \"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5\": container with ID starting with bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.852868 4746 scope.go:117] "RemoveContainer" containerID="3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.853071 4746 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab"} err="failed to get container status \"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab\": rpc error: code = NotFound desc = could not find container \"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab\": container with ID starting with 3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.853086 4746 scope.go:117] "RemoveContainer" containerID="0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.853533 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4"} err="failed to get container status \"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\": rpc error: code = NotFound desc = could not find container \"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\": container with ID starting with 0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.853551 4746 scope.go:117] "RemoveContainer" containerID="a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.853729 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045"} err="failed to get container status \"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\": rpc error: code = NotFound desc = could not find container \"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\": container with ID starting with 
a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.853748 4746 scope.go:117] "RemoveContainer" containerID="20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.853904 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503"} err="failed to get container status \"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\": rpc error: code = NotFound desc = could not find container \"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\": container with ID starting with 20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.853922 4746 scope.go:117] "RemoveContainer" containerID="2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.854134 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2"} err="failed to get container status \"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\": rpc error: code = NotFound desc = could not find container \"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\": container with ID starting with 2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.854154 4746 scope.go:117] "RemoveContainer" containerID="f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.854314 4746 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578"} err="failed to get container status \"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\": rpc error: code = NotFound desc = could not find container \"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\": container with ID starting with f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.854333 4746 scope.go:117] "RemoveContainer" containerID="22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.854564 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2"} err="failed to get container status \"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\": rpc error: code = NotFound desc = could not find container \"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\": container with ID starting with 22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.854582 4746 scope.go:117] "RemoveContainer" containerID="8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.854836 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda"} err="failed to get container status \"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\": rpc error: code = NotFound desc = could not find container \"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\": container with ID starting with 8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda not found: ID does not 
exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.854860 4746 scope.go:117] "RemoveContainer" containerID="ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.855026 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0"} err="failed to get container status \"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\": rpc error: code = NotFound desc = could not find container \"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\": container with ID starting with ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.855049 4746 scope.go:117] "RemoveContainer" containerID="bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.855304 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5"} err="failed to get container status \"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5\": rpc error: code = NotFound desc = could not find container \"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5\": container with ID starting with bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.855327 4746 scope.go:117] "RemoveContainer" containerID="3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.855542 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab"} err="failed to get container status 
\"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab\": rpc error: code = NotFound desc = could not find container \"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab\": container with ID starting with 3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.855559 4746 scope.go:117] "RemoveContainer" containerID="0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.855725 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4"} err="failed to get container status \"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\": rpc error: code = NotFound desc = could not find container \"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\": container with ID starting with 0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.855747 4746 scope.go:117] "RemoveContainer" containerID="a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.855958 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045"} err="failed to get container status \"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\": rpc error: code = NotFound desc = could not find container \"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\": container with ID starting with a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.855976 4746 scope.go:117] "RemoveContainer" 
containerID="20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.856237 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503"} err="failed to get container status \"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\": rpc error: code = NotFound desc = could not find container \"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\": container with ID starting with 20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.856254 4746 scope.go:117] "RemoveContainer" containerID="2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.856420 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2"} err="failed to get container status \"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\": rpc error: code = NotFound desc = could not find container \"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\": container with ID starting with 2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.856435 4746 scope.go:117] "RemoveContainer" containerID="f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.856664 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578"} err="failed to get container status \"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\": rpc error: code = NotFound desc = could 
not find container \"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\": container with ID starting with f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.856682 4746 scope.go:117] "RemoveContainer" containerID="22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.856895 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2"} err="failed to get container status \"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\": rpc error: code = NotFound desc = could not find container \"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\": container with ID starting with 22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.856913 4746 scope.go:117] "RemoveContainer" containerID="8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.857124 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda"} err="failed to get container status \"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\": rpc error: code = NotFound desc = could not find container \"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\": container with ID starting with 8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.857146 4746 scope.go:117] "RemoveContainer" containerID="ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 
10:04:05.857409 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0"} err="failed to get container status \"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\": rpc error: code = NotFound desc = could not find container \"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\": container with ID starting with ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.857434 4746 scope.go:117] "RemoveContainer" containerID="bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.857626 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5"} err="failed to get container status \"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5\": rpc error: code = NotFound desc = could not find container \"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5\": container with ID starting with bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.857657 4746 scope.go:117] "RemoveContainer" containerID="3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.857873 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab"} err="failed to get container status \"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab\": rpc error: code = NotFound desc = could not find container \"3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab\": container with ID starting with 
3d0bdad98dfe3fc2afc673f8eb97098dd74490ff6b9d1fdde67a381aab752cab not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.857892 4746 scope.go:117] "RemoveContainer" containerID="0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.858079 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4"} err="failed to get container status \"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\": rpc error: code = NotFound desc = could not find container \"0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4\": container with ID starting with 0d14bc3f9f20d09d44390ed90aff82b31555371b30755f88fea9abbea6503ba4 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.858096 4746 scope.go:117] "RemoveContainer" containerID="a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.858327 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045"} err="failed to get container status \"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\": rpc error: code = NotFound desc = could not find container \"a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045\": container with ID starting with a9070c09fa62becc8e9194cdd7b2bd572b836d1c6f427179e74a0a6eb46a3045 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.858347 4746 scope.go:117] "RemoveContainer" containerID="20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.858472 4746 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503"} err="failed to get container status \"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\": rpc error: code = NotFound desc = could not find container \"20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503\": container with ID starting with 20d5d39a0b8f32dfea1480772ebc2e646a5daa25e9c570bc699a6e2b48eab503 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.858488 4746 scope.go:117] "RemoveContainer" containerID="2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.858733 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2"} err="failed to get container status \"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\": rpc error: code = NotFound desc = could not find container \"2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2\": container with ID starting with 2cfd4f67695500b7c9e3984804815c777c4ec67b2a806efbc07e47ed5d2a7da2 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.858755 4746 scope.go:117] "RemoveContainer" containerID="f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.858903 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578"} err="failed to get container status \"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\": rpc error: code = NotFound desc = could not find container \"f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578\": container with ID starting with f970d00bebcf511146cc632ed13b9f0ee3818423ef3914896790f9098193c578 not found: ID does not 
exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.858921 4746 scope.go:117] "RemoveContainer" containerID="22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.859089 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2"} err="failed to get container status \"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\": rpc error: code = NotFound desc = could not find container \"22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2\": container with ID starting with 22fb03544b8638d3bd12446e300499d5761525ebe3564533e4ca815cf093d1d2 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.859107 4746 scope.go:117] "RemoveContainer" containerID="8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.859456 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda"} err="failed to get container status \"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\": rpc error: code = NotFound desc = could not find container \"8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda\": container with ID starting with 8fa684ab6f259f08b57c9a39fbcd41aa730caf6c4e5343c36ff2745547fdbdda not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.859477 4746 scope.go:117] "RemoveContainer" containerID="ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.859793 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0"} err="failed to get container status 
\"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\": rpc error: code = NotFound desc = could not find container \"ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0\": container with ID starting with ac2e52648f2ae5e144ea78a7d0e0172206bad7363640aeca3ee3a04db8536ab0 not found: ID does not exist" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.859913 4746 scope.go:117] "RemoveContainer" containerID="bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5" Dec 11 10:04:05 crc kubenswrapper[4746]: I1211 10:04:05.860302 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5"} err="failed to get container status \"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5\": rpc error: code = NotFound desc = could not find container \"bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5\": container with ID starting with bf8d0b52687143fd230403ce8f7e426e96b56848f361760f6571c4baa956c9a5 not found: ID does not exist" Dec 11 10:04:06 crc kubenswrapper[4746]: I1211 10:04:06.103990 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:06 crc kubenswrapper[4746]: I1211 10:04:06.600376 4746 generic.go:334] "Generic (PLEG): container finished" podID="209d9ad0-1bae-4485-8f78-085b167a23e4" containerID="3650aba36eb4a804571bdafce6a2401c96dcee8f89da31033d3a52ab41b3ac03" exitCode=0 Dec 11 10:04:06 crc kubenswrapper[4746]: I1211 10:04:06.600460 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" event={"ID":"209d9ad0-1bae-4485-8f78-085b167a23e4","Type":"ContainerDied","Data":"3650aba36eb4a804571bdafce6a2401c96dcee8f89da31033d3a52ab41b3ac03"} Dec 11 10:04:06 crc kubenswrapper[4746]: I1211 10:04:06.600549 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" event={"ID":"209d9ad0-1bae-4485-8f78-085b167a23e4","Type":"ContainerStarted","Data":"4e82d1d687366f6528e96a8c1c5946d636706292b735830b88dc5763f843900d"} Dec 11 10:04:07 crc kubenswrapper[4746]: I1211 10:04:07.608509 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" event={"ID":"209d9ad0-1bae-4485-8f78-085b167a23e4","Type":"ContainerStarted","Data":"635bfad2dc06844a2caf4b0e807d10297f40e7d9cbfc8b95ee12c4df2e6c1bf8"} Dec 11 10:04:07 crc kubenswrapper[4746]: I1211 10:04:07.638893 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="014636cb-e768-4554-9556-460db2ebfdcb" path="/var/lib/kubelet/pods/014636cb-e768-4554-9556-460db2ebfdcb/volumes" Dec 11 10:04:08 crc kubenswrapper[4746]: I1211 10:04:08.616988 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" event={"ID":"209d9ad0-1bae-4485-8f78-085b167a23e4","Type":"ContainerStarted","Data":"6575a3224e31474aabb7f3364d1211ecdadd254ca402bdac41cfd2743c8c96ab"} Dec 11 10:04:08 crc kubenswrapper[4746]: I1211 10:04:08.617461 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" event={"ID":"209d9ad0-1bae-4485-8f78-085b167a23e4","Type":"ContainerStarted","Data":"c63fcd5203e159be131151be22abc86c2ab5cd2cef11097f3b41a32607221025"} Dec 11 10:04:08 crc kubenswrapper[4746]: I1211 10:04:08.617484 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" event={"ID":"209d9ad0-1bae-4485-8f78-085b167a23e4","Type":"ContainerStarted","Data":"110713c3b3872893e4fdb9eaf5c4f24a23fd060d3dea0a2cde5c57a9307b6a71"} Dec 11 10:04:09 crc kubenswrapper[4746]: I1211 10:04:09.625945 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" event={"ID":"209d9ad0-1bae-4485-8f78-085b167a23e4","Type":"ContainerStarted","Data":"f9ffb8a8316777d022fcd84928028676fc2d204047e93e6a1d3f2aa0e4bba136"} Dec 11 10:04:09 crc kubenswrapper[4746]: I1211 10:04:09.625984 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" event={"ID":"209d9ad0-1bae-4485-8f78-085b167a23e4","Type":"ContainerStarted","Data":"ac089aec69fee658ccdfe22a23b8cf036e4e25b2ebc6e623a74777143cfe2096"} Dec 11 10:04:11 crc kubenswrapper[4746]: I1211 10:04:11.641254 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" event={"ID":"209d9ad0-1bae-4485-8f78-085b167a23e4","Type":"ContainerStarted","Data":"3a8ff69c6dd4e0af8cb4b8be9a0076c79800b257da8156c1de486b8e2d0c411a"} Dec 11 10:04:13 crc kubenswrapper[4746]: I1211 10:04:13.716606 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" event={"ID":"209d9ad0-1bae-4485-8f78-085b167a23e4","Type":"ContainerStarted","Data":"c59bd95019d7feaac129790e94ac7e31091f8ab82ced6952291597ca181c96dc"} Dec 11 10:04:13 crc kubenswrapper[4746]: I1211 10:04:13.717244 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:13 
crc kubenswrapper[4746]: I1211 10:04:13.717260 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:13 crc kubenswrapper[4746]: I1211 10:04:13.717272 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:13 crc kubenswrapper[4746]: I1211 10:04:13.739417 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:13 crc kubenswrapper[4746]: I1211 10:04:13.745389 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:13 crc kubenswrapper[4746]: I1211 10:04:13.751570 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" podStartSLOduration=8.751554045 podStartE2EDuration="8.751554045s" podCreationTimestamp="2025-12-11 10:04:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:04:13.74989695 +0000 UTC m=+626.609760293" watchObservedRunningTime="2025-12-11 10:04:13.751554045 +0000 UTC m=+626.611417358" Dec 11 10:04:16 crc kubenswrapper[4746]: I1211 10:04:16.630870 4746 scope.go:117] "RemoveContainer" containerID="d352579c684a02b9fd849e08b256a881f7ee136d38731825f95f347ab33b36a1" Dec 11 10:04:16 crc kubenswrapper[4746]: E1211 10:04:16.632135 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-r622c_openshift-multus(52ba00d9-b0ef-4496-a6b8-e170f405c592)\"" pod="openshift-multus/multus-r622c" podUID="52ba00d9-b0ef-4496-a6b8-e170f405c592" Dec 11 10:04:29 crc kubenswrapper[4746]: I1211 10:04:29.877803 4746 patch_prober.go:28] interesting 
pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:04:29 crc kubenswrapper[4746]: I1211 10:04:29.880618 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:04:29 crc kubenswrapper[4746]: I1211 10:04:29.880894 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 10:04:29 crc kubenswrapper[4746]: I1211 10:04:29.881692 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"83dcd2da292677c5018ad8d1c74c0fb581818e298e9cb9996f2d7ffb5e3102ac"} pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:04:29 crc kubenswrapper[4746]: I1211 10:04:29.881840 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" containerID="cri-o://83dcd2da292677c5018ad8d1c74c0fb581818e298e9cb9996f2d7ffb5e3102ac" gracePeriod=600 Dec 11 10:04:30 crc kubenswrapper[4746]: I1211 10:04:30.630110 4746 scope.go:117] "RemoveContainer" containerID="d352579c684a02b9fd849e08b256a881f7ee136d38731825f95f347ab33b36a1" Dec 11 10:04:30 crc kubenswrapper[4746]: I1211 10:04:30.819335 4746 generic.go:334] "Generic (PLEG): container finished" 
podID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerID="83dcd2da292677c5018ad8d1c74c0fb581818e298e9cb9996f2d7ffb5e3102ac" exitCode=0 Dec 11 10:04:30 crc kubenswrapper[4746]: I1211 10:04:30.819383 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerDied","Data":"83dcd2da292677c5018ad8d1c74c0fb581818e298e9cb9996f2d7ffb5e3102ac"} Dec 11 10:04:30 crc kubenswrapper[4746]: I1211 10:04:30.819426 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"6d9fcc5a422995e470abf6d012848f503e289189e1935c89236af0c3efd0b192"} Dec 11 10:04:30 crc kubenswrapper[4746]: I1211 10:04:30.819454 4746 scope.go:117] "RemoveContainer" containerID="2a42ad17d86d35ad64e581cff61d7570f6fe9c16ebe1b1b6377d0f2511611aed" Dec 11 10:04:31 crc kubenswrapper[4746]: I1211 10:04:31.831298 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r622c_52ba00d9-b0ef-4496-a6b8-e170f405c592/kube-multus/2.log" Dec 11 10:04:31 crc kubenswrapper[4746]: I1211 10:04:31.832441 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r622c_52ba00d9-b0ef-4496-a6b8-e170f405c592/kube-multus/1.log" Dec 11 10:04:31 crc kubenswrapper[4746]: I1211 10:04:31.832483 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-r622c" event={"ID":"52ba00d9-b0ef-4496-a6b8-e170f405c592","Type":"ContainerStarted","Data":"4c0a1105ea458f8f58c61aede3fbf5b0f03ef059348aa1650ca5355efcd063ec"} Dec 11 10:04:36 crc kubenswrapper[4746]: I1211 10:04:36.147238 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-8jb9d" Dec 11 10:04:45 crc kubenswrapper[4746]: I1211 10:04:45.535771 4746 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777"] Dec 11 10:04:45 crc kubenswrapper[4746]: I1211 10:04:45.537542 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" Dec 11 10:04:45 crc kubenswrapper[4746]: I1211 10:04:45.543279 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 11 10:04:45 crc kubenswrapper[4746]: I1211 10:04:45.545458 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777"] Dec 11 10:04:45 crc kubenswrapper[4746]: I1211 10:04:45.646395 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbcp2\" (UniqueName: \"kubernetes.io/projected/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-kube-api-access-fbcp2\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777\" (UID: \"42a9ddfe-247e-4cae-ab22-03c2e0b4a494\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" Dec 11 10:04:45 crc kubenswrapper[4746]: I1211 10:04:45.646569 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777\" (UID: \"42a9ddfe-247e-4cae-ab22-03c2e0b4a494\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" Dec 11 10:04:45 crc kubenswrapper[4746]: I1211 10:04:45.646898 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-bundle\") pod 
\"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777\" (UID: \"42a9ddfe-247e-4cae-ab22-03c2e0b4a494\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" Dec 11 10:04:45 crc kubenswrapper[4746]: I1211 10:04:45.748908 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbcp2\" (UniqueName: \"kubernetes.io/projected/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-kube-api-access-fbcp2\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777\" (UID: \"42a9ddfe-247e-4cae-ab22-03c2e0b4a494\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" Dec 11 10:04:45 crc kubenswrapper[4746]: I1211 10:04:45.749388 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777\" (UID: \"42a9ddfe-247e-4cae-ab22-03c2e0b4a494\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" Dec 11 10:04:45 crc kubenswrapper[4746]: I1211 10:04:45.749718 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777\" (UID: \"42a9ddfe-247e-4cae-ab22-03c2e0b4a494\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" Dec 11 10:04:45 crc kubenswrapper[4746]: I1211 10:04:45.750873 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-util\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777\" (UID: \"42a9ddfe-247e-4cae-ab22-03c2e0b4a494\") " 
pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" Dec 11 10:04:45 crc kubenswrapper[4746]: I1211 10:04:45.751721 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-bundle\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777\" (UID: \"42a9ddfe-247e-4cae-ab22-03c2e0b4a494\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" Dec 11 10:04:45 crc kubenswrapper[4746]: I1211 10:04:45.780322 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbcp2\" (UniqueName: \"kubernetes.io/projected/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-kube-api-access-fbcp2\") pod \"98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777\" (UID: \"42a9ddfe-247e-4cae-ab22-03c2e0b4a494\") " pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" Dec 11 10:04:45 crc kubenswrapper[4746]: I1211 10:04:45.852920 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" Dec 11 10:04:46 crc kubenswrapper[4746]: I1211 10:04:46.044975 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777"] Dec 11 10:04:46 crc kubenswrapper[4746]: I1211 10:04:46.961841 4746 generic.go:334] "Generic (PLEG): container finished" podID="42a9ddfe-247e-4cae-ab22-03c2e0b4a494" containerID="f6ddd17773fd1cb9b709a0064b66911f50e54a9a609017d4e64e1aa1df4b53fd" exitCode=0 Dec 11 10:04:46 crc kubenswrapper[4746]: I1211 10:04:46.961936 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" event={"ID":"42a9ddfe-247e-4cae-ab22-03c2e0b4a494","Type":"ContainerDied","Data":"f6ddd17773fd1cb9b709a0064b66911f50e54a9a609017d4e64e1aa1df4b53fd"} Dec 11 10:04:46 crc kubenswrapper[4746]: I1211 10:04:46.962175 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" event={"ID":"42a9ddfe-247e-4cae-ab22-03c2e0b4a494","Type":"ContainerStarted","Data":"2bdbb3702c86e794cce54f5cd5020c74d3f556ad68a902ac1f3ab7a8dfa60a18"} Dec 11 10:04:48 crc kubenswrapper[4746]: I1211 10:04:48.974094 4746 generic.go:334] "Generic (PLEG): container finished" podID="42a9ddfe-247e-4cae-ab22-03c2e0b4a494" containerID="5bdaec3f7cad8c7408fa4bd7cb004f80b17647e06e7f61a5ade926c3cf9c36d0" exitCode=0 Dec 11 10:04:48 crc kubenswrapper[4746]: I1211 10:04:48.974138 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" event={"ID":"42a9ddfe-247e-4cae-ab22-03c2e0b4a494","Type":"ContainerDied","Data":"5bdaec3f7cad8c7408fa4bd7cb004f80b17647e06e7f61a5ade926c3cf9c36d0"} Dec 11 10:04:49 crc kubenswrapper[4746]: I1211 10:04:49.986823 4746 
generic.go:334] "Generic (PLEG): container finished" podID="42a9ddfe-247e-4cae-ab22-03c2e0b4a494" containerID="342314e67c6649c41d563141a5bf285f1129e6140eb1f89e0a5bcf0767a06e4e" exitCode=0 Dec 11 10:04:49 crc kubenswrapper[4746]: I1211 10:04:49.986906 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" event={"ID":"42a9ddfe-247e-4cae-ab22-03c2e0b4a494","Type":"ContainerDied","Data":"342314e67c6649c41d563141a5bf285f1129e6140eb1f89e0a5bcf0767a06e4e"} Dec 11 10:04:51 crc kubenswrapper[4746]: I1211 10:04:51.195360 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" Dec 11 10:04:51 crc kubenswrapper[4746]: I1211 10:04:51.315043 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-bundle\") pod \"42a9ddfe-247e-4cae-ab22-03c2e0b4a494\" (UID: \"42a9ddfe-247e-4cae-ab22-03c2e0b4a494\") " Dec 11 10:04:51 crc kubenswrapper[4746]: I1211 10:04:51.315115 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-util\") pod \"42a9ddfe-247e-4cae-ab22-03c2e0b4a494\" (UID: \"42a9ddfe-247e-4cae-ab22-03c2e0b4a494\") " Dec 11 10:04:51 crc kubenswrapper[4746]: I1211 10:04:51.315158 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbcp2\" (UniqueName: \"kubernetes.io/projected/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-kube-api-access-fbcp2\") pod \"42a9ddfe-247e-4cae-ab22-03c2e0b4a494\" (UID: \"42a9ddfe-247e-4cae-ab22-03c2e0b4a494\") " Dec 11 10:04:51 crc kubenswrapper[4746]: I1211 10:04:51.316651 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-bundle" (OuterVolumeSpecName: "bundle") pod "42a9ddfe-247e-4cae-ab22-03c2e0b4a494" (UID: "42a9ddfe-247e-4cae-ab22-03c2e0b4a494"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:04:51 crc kubenswrapper[4746]: I1211 10:04:51.321218 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-kube-api-access-fbcp2" (OuterVolumeSpecName: "kube-api-access-fbcp2") pod "42a9ddfe-247e-4cae-ab22-03c2e0b4a494" (UID: "42a9ddfe-247e-4cae-ab22-03c2e0b4a494"). InnerVolumeSpecName "kube-api-access-fbcp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:04:51 crc kubenswrapper[4746]: I1211 10:04:51.329593 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-util" (OuterVolumeSpecName: "util") pod "42a9ddfe-247e-4cae-ab22-03c2e0b4a494" (UID: "42a9ddfe-247e-4cae-ab22-03c2e0b4a494"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:04:51 crc kubenswrapper[4746]: I1211 10:04:51.367872 4746 scope.go:117] "RemoveContainer" containerID="2e317f8ae14d2ec2bb140d6293e5de8f1b9f1403d8ec9e68da06711aa5c8e467" Dec 11 10:04:51 crc kubenswrapper[4746]: I1211 10:04:51.416857 4746 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:51 crc kubenswrapper[4746]: I1211 10:04:51.416885 4746 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-util\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:51 crc kubenswrapper[4746]: I1211 10:04:51.416894 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbcp2\" (UniqueName: \"kubernetes.io/projected/42a9ddfe-247e-4cae-ab22-03c2e0b4a494-kube-api-access-fbcp2\") on node \"crc\" DevicePath \"\"" Dec 11 10:04:52 crc kubenswrapper[4746]: I1211 10:04:52.000358 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-r622c_52ba00d9-b0ef-4496-a6b8-e170f405c592/kube-multus/2.log" Dec 11 10:04:52 crc kubenswrapper[4746]: I1211 10:04:52.002927 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" event={"ID":"42a9ddfe-247e-4cae-ab22-03c2e0b4a494","Type":"ContainerDied","Data":"2bdbb3702c86e794cce54f5cd5020c74d3f556ad68a902ac1f3ab7a8dfa60a18"} Dec 11 10:04:52 crc kubenswrapper[4746]: I1211 10:04:52.002963 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bdbb3702c86e794cce54f5cd5020c74d3f556ad68a902ac1f3ab7a8dfa60a18" Dec 11 10:04:52 crc kubenswrapper[4746]: I1211 10:04:52.003159 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777" Dec 11 10:04:53 crc kubenswrapper[4746]: I1211 10:04:53.822934 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-dln6r"] Dec 11 10:04:53 crc kubenswrapper[4746]: E1211 10:04:53.823905 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a9ddfe-247e-4cae-ab22-03c2e0b4a494" containerName="pull" Dec 11 10:04:53 crc kubenswrapper[4746]: I1211 10:04:53.823929 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a9ddfe-247e-4cae-ab22-03c2e0b4a494" containerName="pull" Dec 11 10:04:53 crc kubenswrapper[4746]: E1211 10:04:53.823953 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a9ddfe-247e-4cae-ab22-03c2e0b4a494" containerName="util" Dec 11 10:04:53 crc kubenswrapper[4746]: I1211 10:04:53.823962 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a9ddfe-247e-4cae-ab22-03c2e0b4a494" containerName="util" Dec 11 10:04:53 crc kubenswrapper[4746]: E1211 10:04:53.823976 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42a9ddfe-247e-4cae-ab22-03c2e0b4a494" containerName="extract" Dec 11 10:04:53 crc kubenswrapper[4746]: I1211 10:04:53.823984 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="42a9ddfe-247e-4cae-ab22-03c2e0b4a494" containerName="extract" Dec 11 10:04:53 crc kubenswrapper[4746]: I1211 10:04:53.824182 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="42a9ddfe-247e-4cae-ab22-03c2e0b4a494" containerName="extract" Dec 11 10:04:53 crc kubenswrapper[4746]: I1211 10:04:53.824824 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-dln6r" Dec 11 10:04:53 crc kubenswrapper[4746]: I1211 10:04:53.826532 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 11 10:04:53 crc kubenswrapper[4746]: I1211 10:04:53.827108 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 11 10:04:53 crc kubenswrapper[4746]: I1211 10:04:53.827755 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-pxnv2" Dec 11 10:04:53 crc kubenswrapper[4746]: I1211 10:04:53.837628 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-dln6r"] Dec 11 10:04:53 crc kubenswrapper[4746]: I1211 10:04:53.956530 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqhjb\" (UniqueName: \"kubernetes.io/projected/f3af9a18-e9fe-429b-988a-4289790515b6-kube-api-access-hqhjb\") pod \"nmstate-operator-6769fb99d-dln6r\" (UID: \"f3af9a18-e9fe-429b-988a-4289790515b6\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-dln6r" Dec 11 10:04:54 crc kubenswrapper[4746]: I1211 10:04:54.057740 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqhjb\" (UniqueName: \"kubernetes.io/projected/f3af9a18-e9fe-429b-988a-4289790515b6-kube-api-access-hqhjb\") pod \"nmstate-operator-6769fb99d-dln6r\" (UID: \"f3af9a18-e9fe-429b-988a-4289790515b6\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-dln6r" Dec 11 10:04:54 crc kubenswrapper[4746]: I1211 10:04:54.079831 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqhjb\" (UniqueName: \"kubernetes.io/projected/f3af9a18-e9fe-429b-988a-4289790515b6-kube-api-access-hqhjb\") pod \"nmstate-operator-6769fb99d-dln6r\" (UID: 
\"f3af9a18-e9fe-429b-988a-4289790515b6\") " pod="openshift-nmstate/nmstate-operator-6769fb99d-dln6r" Dec 11 10:04:54 crc kubenswrapper[4746]: I1211 10:04:54.140499 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-6769fb99d-dln6r" Dec 11 10:04:54 crc kubenswrapper[4746]: I1211 10:04:54.347937 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-6769fb99d-dln6r"] Dec 11 10:04:55 crc kubenswrapper[4746]: I1211 10:04:55.018233 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-dln6r" event={"ID":"f3af9a18-e9fe-429b-988a-4289790515b6","Type":"ContainerStarted","Data":"0f776a99d3d7e8772981bb13292e29879196e9ed5eaf54e6cc8a166075799bc6"} Dec 11 10:04:57 crc kubenswrapper[4746]: I1211 10:04:57.030817 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-6769fb99d-dln6r" event={"ID":"f3af9a18-e9fe-429b-988a-4289790515b6","Type":"ContainerStarted","Data":"b06b13d81dbff06b694dc0b8e15f4a44dc0527b154ea641bfacba65a4c9e6baa"} Dec 11 10:04:57 crc kubenswrapper[4746]: I1211 10:04:57.045510 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-6769fb99d-dln6r" podStartSLOduration=1.683879325 podStartE2EDuration="4.045492634s" podCreationTimestamp="2025-12-11 10:04:53 +0000 UTC" firstStartedPulling="2025-12-11 10:04:54.356991795 +0000 UTC m=+667.216855108" lastFinishedPulling="2025-12-11 10:04:56.718605104 +0000 UTC m=+669.578468417" observedRunningTime="2025-12-11 10:04:57.04463562 +0000 UTC m=+669.904498933" watchObservedRunningTime="2025-12-11 10:04:57.045492634 +0000 UTC m=+669.905355947" Dec 11 10:04:57 crc kubenswrapper[4746]: I1211 10:04:57.986541 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-2sdgm"] Dec 11 10:04:57 crc kubenswrapper[4746]: I1211 10:04:57.987644 
4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-2sdgm" Dec 11 10:04:57 crc kubenswrapper[4746]: I1211 10:04:57.992266 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-qh2jd" Dec 11 10:04:57 crc kubenswrapper[4746]: I1211 10:04:57.997090 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4"] Dec 11 10:04:57 crc kubenswrapper[4746]: I1211 10:04:57.998137 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.001096 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.008872 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-2sdgm"] Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.022659 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-9kf4z"] Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.023497 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-9kf4z" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.034859 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4"] Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.107943 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9b209fd0-9f8c-4608-99df-7c691450b004-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-vt7v4\" (UID: \"9b209fd0-9f8c-4608-99df-7c691450b004\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.108016 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/54a843e2-1db9-49db-89e5-5254b7b50bab-ovs-socket\") pod \"nmstate-handler-9kf4z\" (UID: \"54a843e2-1db9-49db-89e5-5254b7b50bab\") " pod="openshift-nmstate/nmstate-handler-9kf4z" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.108098 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/54a843e2-1db9-49db-89e5-5254b7b50bab-nmstate-lock\") pod \"nmstate-handler-9kf4z\" (UID: \"54a843e2-1db9-49db-89e5-5254b7b50bab\") " pod="openshift-nmstate/nmstate-handler-9kf4z" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.108127 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwm86\" (UniqueName: \"kubernetes.io/projected/a911ba40-1cb3-4447-8f86-b03341052ae8-kube-api-access-pwm86\") pod \"nmstate-metrics-7f7f7578db-2sdgm\" (UID: \"a911ba40-1cb3-4447-8f86-b03341052ae8\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-2sdgm" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.108173 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcc4z\" (UniqueName: \"kubernetes.io/projected/9b209fd0-9f8c-4608-99df-7c691450b004-kube-api-access-hcc4z\") pod \"nmstate-webhook-f8fb84555-vt7v4\" (UID: \"9b209fd0-9f8c-4608-99df-7c691450b004\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.108194 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkpg2\" (UniqueName: \"kubernetes.io/projected/54a843e2-1db9-49db-89e5-5254b7b50bab-kube-api-access-vkpg2\") pod \"nmstate-handler-9kf4z\" (UID: \"54a843e2-1db9-49db-89e5-5254b7b50bab\") " pod="openshift-nmstate/nmstate-handler-9kf4z" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.108217 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/54a843e2-1db9-49db-89e5-5254b7b50bab-dbus-socket\") pod \"nmstate-handler-9kf4z\" (UID: \"54a843e2-1db9-49db-89e5-5254b7b50bab\") " pod="openshift-nmstate/nmstate-handler-9kf4z" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.140954 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7"] Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.141786 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.147783 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.148167 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-lwz7n" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.151981 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.152385 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7"] Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.208939 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/54a843e2-1db9-49db-89e5-5254b7b50bab-nmstate-lock\") pod \"nmstate-handler-9kf4z\" (UID: \"54a843e2-1db9-49db-89e5-5254b7b50bab\") " pod="openshift-nmstate/nmstate-handler-9kf4z" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.208996 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwm86\" (UniqueName: \"kubernetes.io/projected/a911ba40-1cb3-4447-8f86-b03341052ae8-kube-api-access-pwm86\") pod \"nmstate-metrics-7f7f7578db-2sdgm\" (UID: \"a911ba40-1cb3-4447-8f86-b03341052ae8\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-2sdgm" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.209028 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-rzgk7\" (UID: \"7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f\") " 
pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.209074 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/54a843e2-1db9-49db-89e5-5254b7b50bab-nmstate-lock\") pod \"nmstate-handler-9kf4z\" (UID: \"54a843e2-1db9-49db-89e5-5254b7b50bab\") " pod="openshift-nmstate/nmstate-handler-9kf4z" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.209101 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcc4z\" (UniqueName: \"kubernetes.io/projected/9b209fd0-9f8c-4608-99df-7c691450b004-kube-api-access-hcc4z\") pod \"nmstate-webhook-f8fb84555-vt7v4\" (UID: \"9b209fd0-9f8c-4608-99df-7c691450b004\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.209143 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkpg2\" (UniqueName: \"kubernetes.io/projected/54a843e2-1db9-49db-89e5-5254b7b50bab-kube-api-access-vkpg2\") pod \"nmstate-handler-9kf4z\" (UID: \"54a843e2-1db9-49db-89e5-5254b7b50bab\") " pod="openshift-nmstate/nmstate-handler-9kf4z" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.209174 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/54a843e2-1db9-49db-89e5-5254b7b50bab-dbus-socket\") pod \"nmstate-handler-9kf4z\" (UID: \"54a843e2-1db9-49db-89e5-5254b7b50bab\") " pod="openshift-nmstate/nmstate-handler-9kf4z" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.209197 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9b209fd0-9f8c-4608-99df-7c691450b004-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-vt7v4\" (UID: \"9b209fd0-9f8c-4608-99df-7c691450b004\") " 
pod="openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.209225 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-rzgk7\" (UID: \"7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.209260 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/54a843e2-1db9-49db-89e5-5254b7b50bab-ovs-socket\") pod \"nmstate-handler-9kf4z\" (UID: \"54a843e2-1db9-49db-89e5-5254b7b50bab\") " pod="openshift-nmstate/nmstate-handler-9kf4z" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.209286 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lq4t\" (UniqueName: \"kubernetes.io/projected/7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f-kube-api-access-7lq4t\") pod \"nmstate-console-plugin-6ff7998486-rzgk7\" (UID: \"7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7" Dec 11 10:04:58 crc kubenswrapper[4746]: E1211 10:04:58.209419 4746 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.209472 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/54a843e2-1db9-49db-89e5-5254b7b50bab-ovs-socket\") pod \"nmstate-handler-9kf4z\" (UID: \"54a843e2-1db9-49db-89e5-5254b7b50bab\") " pod="openshift-nmstate/nmstate-handler-9kf4z" Dec 11 10:04:58 crc kubenswrapper[4746]: E1211 10:04:58.209484 4746 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/9b209fd0-9f8c-4608-99df-7c691450b004-tls-key-pair podName:9b209fd0-9f8c-4608-99df-7c691450b004 nodeName:}" failed. No retries permitted until 2025-12-11 10:04:58.709464622 +0000 UTC m=+671.569327935 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/9b209fd0-9f8c-4608-99df-7c691450b004-tls-key-pair") pod "nmstate-webhook-f8fb84555-vt7v4" (UID: "9b209fd0-9f8c-4608-99df-7c691450b004") : secret "openshift-nmstate-webhook" not found Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.209654 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/54a843e2-1db9-49db-89e5-5254b7b50bab-dbus-socket\") pod \"nmstate-handler-9kf4z\" (UID: \"54a843e2-1db9-49db-89e5-5254b7b50bab\") " pod="openshift-nmstate/nmstate-handler-9kf4z" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.227842 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcc4z\" (UniqueName: \"kubernetes.io/projected/9b209fd0-9f8c-4608-99df-7c691450b004-kube-api-access-hcc4z\") pod \"nmstate-webhook-f8fb84555-vt7v4\" (UID: \"9b209fd0-9f8c-4608-99df-7c691450b004\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.231689 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkpg2\" (UniqueName: \"kubernetes.io/projected/54a843e2-1db9-49db-89e5-5254b7b50bab-kube-api-access-vkpg2\") pod \"nmstate-handler-9kf4z\" (UID: \"54a843e2-1db9-49db-89e5-5254b7b50bab\") " pod="openshift-nmstate/nmstate-handler-9kf4z" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.235615 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwm86\" (UniqueName: \"kubernetes.io/projected/a911ba40-1cb3-4447-8f86-b03341052ae8-kube-api-access-pwm86\") pod 
\"nmstate-metrics-7f7f7578db-2sdgm\" (UID: \"a911ba40-1cb3-4447-8f86-b03341052ae8\") " pod="openshift-nmstate/nmstate-metrics-7f7f7578db-2sdgm" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.307019 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-2sdgm" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.310838 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-rzgk7\" (UID: \"7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.310889 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lq4t\" (UniqueName: \"kubernetes.io/projected/7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f-kube-api-access-7lq4t\") pod \"nmstate-console-plugin-6ff7998486-rzgk7\" (UID: \"7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.310911 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-rzgk7\" (UID: \"7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.311818 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f-nginx-conf\") pod \"nmstate-console-plugin-6ff7998486-rzgk7\" (UID: \"7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f\") " 
pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.316527 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f-plugin-serving-cert\") pod \"nmstate-console-plugin-6ff7998486-rzgk7\" (UID: \"7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.341755 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lq4t\" (UniqueName: \"kubernetes.io/projected/7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f-kube-api-access-7lq4t\") pod \"nmstate-console-plugin-6ff7998486-rzgk7\" (UID: \"7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f\") " pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.344832 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-87bbbf7b6-jf5qc"] Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.345091 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-9kf4z" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.345727 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.362795 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-87bbbf7b6-jf5qc"] Dec 11 10:04:58 crc kubenswrapper[4746]: W1211 10:04:58.398393 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54a843e2_1db9_49db_89e5_5254b7b50bab.slice/crio-9c4f34bf81aac7bce868a44ccc53c40c9bb67d6044f53199cdf4cbeb0c4005e8 WatchSource:0}: Error finding container 9c4f34bf81aac7bce868a44ccc53c40c9bb67d6044f53199cdf4cbeb0c4005e8: Status 404 returned error can't find the container with id 9c4f34bf81aac7bce868a44ccc53c40c9bb67d6044f53199cdf4cbeb0c4005e8 Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.411600 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c3eebba-9306-40ec-b0be-d2ab06c6615f-console-serving-cert\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.411643 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c3eebba-9306-40ec-b0be-d2ab06c6615f-console-config\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.411675 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c3eebba-9306-40ec-b0be-d2ab06c6615f-oauth-serving-cert\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " 
pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.411707 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c3eebba-9306-40ec-b0be-d2ab06c6615f-console-oauth-config\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.411744 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c3eebba-9306-40ec-b0be-d2ab06c6615f-service-ca\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.411764 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c3eebba-9306-40ec-b0be-d2ab06c6615f-trusted-ca-bundle\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.411808 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnvrz\" (UniqueName: \"kubernetes.io/projected/9c3eebba-9306-40ec-b0be-d2ab06c6615f-kube-api-access-wnvrz\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.460201 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.512795 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c3eebba-9306-40ec-b0be-d2ab06c6615f-trusted-ca-bundle\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.512840 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnvrz\" (UniqueName: \"kubernetes.io/projected/9c3eebba-9306-40ec-b0be-d2ab06c6615f-kube-api-access-wnvrz\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.512886 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c3eebba-9306-40ec-b0be-d2ab06c6615f-console-serving-cert\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.512909 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c3eebba-9306-40ec-b0be-d2ab06c6615f-console-config\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.512942 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c3eebba-9306-40ec-b0be-d2ab06c6615f-oauth-serving-cert\") pod \"console-87bbbf7b6-jf5qc\" (UID: 
\"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.512993 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c3eebba-9306-40ec-b0be-d2ab06c6615f-console-oauth-config\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.513027 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c3eebba-9306-40ec-b0be-d2ab06c6615f-service-ca\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.514257 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/9c3eebba-9306-40ec-b0be-d2ab06c6615f-service-ca\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.514422 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/9c3eebba-9306-40ec-b0be-d2ab06c6615f-oauth-serving-cert\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.514573 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/9c3eebba-9306-40ec-b0be-d2ab06c6615f-console-config\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " 
pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.515717 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c3eebba-9306-40ec-b0be-d2ab06c6615f-trusted-ca-bundle\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.517652 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/9c3eebba-9306-40ec-b0be-d2ab06c6615f-console-oauth-config\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.517652 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/9c3eebba-9306-40ec-b0be-d2ab06c6615f-console-serving-cert\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.533393 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnvrz\" (UniqueName: \"kubernetes.io/projected/9c3eebba-9306-40ec-b0be-d2ab06c6615f-kube-api-access-wnvrz\") pod \"console-87bbbf7b6-jf5qc\" (UID: \"9c3eebba-9306-40ec-b0be-d2ab06c6615f\") " pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.547788 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f7f7578db-2sdgm"] Dec 11 10:04:58 crc kubenswrapper[4746]: W1211 10:04:58.567401 4746 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda911ba40_1cb3_4447_8f86_b03341052ae8.slice/crio-d25e0e8c3bc727a0a75857304cec29afef2b13b21687f1d299d65f9b1fad45e6 WatchSource:0}: Error finding container d25e0e8c3bc727a0a75857304cec29afef2b13b21687f1d299d65f9b1fad45e6: Status 404 returned error can't find the container with id d25e0e8c3bc727a0a75857304cec29afef2b13b21687f1d299d65f9b1fad45e6 Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.666406 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7"] Dec 11 10:04:58 crc kubenswrapper[4746]: W1211 10:04:58.671193 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7aefab1a_981b_48f1_bd2d_4f9f9f5cd49f.slice/crio-72745f93d5d4a5bb11e10f2a5701cb99b77a29bda069c145ca683f71026ddf9c WatchSource:0}: Error finding container 72745f93d5d4a5bb11e10f2a5701cb99b77a29bda069c145ca683f71026ddf9c: Status 404 returned error can't find the container with id 72745f93d5d4a5bb11e10f2a5701cb99b77a29bda069c145ca683f71026ddf9c Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.695687 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.715517 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9b209fd0-9f8c-4608-99df-7c691450b004-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-vt7v4\" (UID: \"9b209fd0-9f8c-4608-99df-7c691450b004\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.721676 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/9b209fd0-9f8c-4608-99df-7c691450b004-tls-key-pair\") pod \"nmstate-webhook-f8fb84555-vt7v4\" (UID: \"9b209fd0-9f8c-4608-99df-7c691450b004\") " pod="openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4" Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.898960 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-87bbbf7b6-jf5qc"] Dec 11 10:04:58 crc kubenswrapper[4746]: W1211 10:04:58.902411 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c3eebba_9306_40ec_b0be_d2ab06c6615f.slice/crio-377cd219e4b1ce4f0139c67d308637f7dfa165c09d76e9ad9ce2d76f960c6ecb WatchSource:0}: Error finding container 377cd219e4b1ce4f0139c67d308637f7dfa165c09d76e9ad9ce2d76f960c6ecb: Status 404 returned error can't find the container with id 377cd219e4b1ce4f0139c67d308637f7dfa165c09d76e9ad9ce2d76f960c6ecb Dec 11 10:04:58 crc kubenswrapper[4746]: I1211 10:04:58.917686 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4" Dec 11 10:04:59 crc kubenswrapper[4746]: I1211 10:04:59.053020 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-2sdgm" event={"ID":"a911ba40-1cb3-4447-8f86-b03341052ae8","Type":"ContainerStarted","Data":"d25e0e8c3bc727a0a75857304cec29afef2b13b21687f1d299d65f9b1fad45e6"} Dec 11 10:04:59 crc kubenswrapper[4746]: I1211 10:04:59.054591 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7" event={"ID":"7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f","Type":"ContainerStarted","Data":"72745f93d5d4a5bb11e10f2a5701cb99b77a29bda069c145ca683f71026ddf9c"} Dec 11 10:04:59 crc kubenswrapper[4746]: I1211 10:04:59.055547 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-87bbbf7b6-jf5qc" event={"ID":"9c3eebba-9306-40ec-b0be-d2ab06c6615f","Type":"ContainerStarted","Data":"377cd219e4b1ce4f0139c67d308637f7dfa165c09d76e9ad9ce2d76f960c6ecb"} Dec 11 10:04:59 crc kubenswrapper[4746]: I1211 10:04:59.056244 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9kf4z" event={"ID":"54a843e2-1db9-49db-89e5-5254b7b50bab","Type":"ContainerStarted","Data":"9c4f34bf81aac7bce868a44ccc53c40c9bb67d6044f53199cdf4cbeb0c4005e8"} Dec 11 10:04:59 crc kubenswrapper[4746]: I1211 10:04:59.117463 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4"] Dec 11 10:05:00 crc kubenswrapper[4746]: I1211 10:05:00.065203 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-87bbbf7b6-jf5qc" event={"ID":"9c3eebba-9306-40ec-b0be-d2ab06c6615f","Type":"ContainerStarted","Data":"bf9b3bffca75275fc62d9eee9fb83ed468f54c80103f93c292dc9c8b647eeea8"} Dec 11 10:05:00 crc kubenswrapper[4746]: I1211 10:05:00.067229 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4" event={"ID":"9b209fd0-9f8c-4608-99df-7c691450b004","Type":"ContainerStarted","Data":"b7420c4f25c72a1f9c7482cc4b2cb146b30bd66f5931a80b20f4f4ee73f65b08"} Dec 11 10:05:00 crc kubenswrapper[4746]: I1211 10:05:00.086448 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-87bbbf7b6-jf5qc" podStartSLOduration=2.08642921 podStartE2EDuration="2.08642921s" podCreationTimestamp="2025-12-11 10:04:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:05:00.083015037 +0000 UTC m=+672.942878350" watchObservedRunningTime="2025-12-11 10:05:00.08642921 +0000 UTC m=+672.946292523" Dec 11 10:05:02 crc kubenswrapper[4746]: I1211 10:05:02.081101 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7" event={"ID":"7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f","Type":"ContainerStarted","Data":"b4222b49798f9c5e6d5dba806ae02543e9d14002bed41b08f6e97437baf23af2"} Dec 11 10:05:02 crc kubenswrapper[4746]: I1211 10:05:02.084840 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4" event={"ID":"9b209fd0-9f8c-4608-99df-7c691450b004","Type":"ContainerStarted","Data":"48500fc7c5c060e413915440dbe8dd7fb46b07450106f2af835d0b46a87a231f"} Dec 11 10:05:02 crc kubenswrapper[4746]: I1211 10:05:02.085030 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4" Dec 11 10:05:02 crc kubenswrapper[4746]: I1211 10:05:02.086614 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-2sdgm" event={"ID":"a911ba40-1cb3-4447-8f86-b03341052ae8","Type":"ContainerStarted","Data":"8d3efa23fa8785be9ae2e3569320daeca4599672567f72ce228632d9723e1e46"} Dec 11 10:05:02 crc kubenswrapper[4746]: I1211 
10:05:02.101698 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6ff7998486-rzgk7" podStartSLOduration=0.909242243 podStartE2EDuration="4.10167911s" podCreationTimestamp="2025-12-11 10:04:58 +0000 UTC" firstStartedPulling="2025-12-11 10:04:58.673155616 +0000 UTC m=+671.533018929" lastFinishedPulling="2025-12-11 10:05:01.865592483 +0000 UTC m=+674.725455796" observedRunningTime="2025-12-11 10:05:02.096329525 +0000 UTC m=+674.956192838" watchObservedRunningTime="2025-12-11 10:05:02.10167911 +0000 UTC m=+674.961542423" Dec 11 10:05:02 crc kubenswrapper[4746]: I1211 10:05:02.113128 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4" podStartSLOduration=2.363246806 podStartE2EDuration="5.113109748s" podCreationTimestamp="2025-12-11 10:04:57 +0000 UTC" firstStartedPulling="2025-12-11 10:04:59.124985261 +0000 UTC m=+671.984848574" lastFinishedPulling="2025-12-11 10:05:01.874848203 +0000 UTC m=+674.734711516" observedRunningTime="2025-12-11 10:05:02.11097955 +0000 UTC m=+674.970842863" watchObservedRunningTime="2025-12-11 10:05:02.113109748 +0000 UTC m=+674.972973061" Dec 11 10:05:03 crc kubenswrapper[4746]: I1211 10:05:03.093342 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9kf4z" event={"ID":"54a843e2-1db9-49db-89e5-5254b7b50bab","Type":"ContainerStarted","Data":"f00913738dfc2adbcd402e099c377532e7fec879032cf5ba872a63089cd4d589"} Dec 11 10:05:03 crc kubenswrapper[4746]: I1211 10:05:03.119342 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-9kf4z" podStartSLOduration=2.657956376 podStartE2EDuration="6.119318176s" podCreationTimestamp="2025-12-11 10:04:57 +0000 UTC" firstStartedPulling="2025-12-11 10:04:58.404592052 +0000 UTC m=+671.264455355" lastFinishedPulling="2025-12-11 10:05:01.865953842 +0000 UTC m=+674.725817155" 
observedRunningTime="2025-12-11 10:05:03.113978852 +0000 UTC m=+675.973842175" watchObservedRunningTime="2025-12-11 10:05:03.119318176 +0000 UTC m=+675.979181489" Dec 11 10:05:03 crc kubenswrapper[4746]: I1211 10:05:03.346186 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-9kf4z" Dec 11 10:05:05 crc kubenswrapper[4746]: I1211 10:05:05.293740 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-2sdgm" event={"ID":"a911ba40-1cb3-4447-8f86-b03341052ae8","Type":"ContainerStarted","Data":"e0b6e05122277ca2f5455baebece21b2d7cc041a4876d719facfd8472ebd4850"} Dec 11 10:05:05 crc kubenswrapper[4746]: I1211 10:05:05.316296 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f7f7578db-2sdgm" podStartSLOduration=2.648882542 podStartE2EDuration="8.316270577s" podCreationTimestamp="2025-12-11 10:04:57 +0000 UTC" firstStartedPulling="2025-12-11 10:04:58.570753371 +0000 UTC m=+671.430616684" lastFinishedPulling="2025-12-11 10:05:04.238141406 +0000 UTC m=+677.098004719" observedRunningTime="2025-12-11 10:05:05.312602147 +0000 UTC m=+678.172465460" watchObservedRunningTime="2025-12-11 10:05:05.316270577 +0000 UTC m=+678.176133900" Dec 11 10:05:08 crc kubenswrapper[4746]: I1211 10:05:08.372996 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-9kf4z" Dec 11 10:05:08 crc kubenswrapper[4746]: I1211 10:05:08.696091 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:05:08 crc kubenswrapper[4746]: I1211 10:05:08.696178 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:05:08 crc kubenswrapper[4746]: I1211 10:05:08.700380 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:05:09 crc kubenswrapper[4746]: I1211 10:05:09.319407 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-87bbbf7b6-jf5qc" Dec 11 10:05:09 crc kubenswrapper[4746]: I1211 10:05:09.388910 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4n677"] Dec 11 10:05:18 crc kubenswrapper[4746]: I1211 10:05:18.926095 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-f8fb84555-vt7v4" Dec 11 10:05:33 crc kubenswrapper[4746]: I1211 10:05:33.450869 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2"] Dec 11 10:05:33 crc kubenswrapper[4746]: I1211 10:05:33.453073 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" Dec 11 10:05:33 crc kubenswrapper[4746]: I1211 10:05:33.462759 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 11 10:05:33 crc kubenswrapper[4746]: I1211 10:05:33.471494 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2"] Dec 11 10:05:33 crc kubenswrapper[4746]: I1211 10:05:33.603229 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmfzt\" (UniqueName: \"kubernetes.io/projected/6a0f0228-471c-45fd-9197-241b2ba3c70a-kube-api-access-qmfzt\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2\" (UID: \"6a0f0228-471c-45fd-9197-241b2ba3c70a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" Dec 11 10:05:33 crc kubenswrapper[4746]: I1211 10:05:33.603324 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a0f0228-471c-45fd-9197-241b2ba3c70a-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2\" (UID: \"6a0f0228-471c-45fd-9197-241b2ba3c70a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" Dec 11 10:05:33 crc kubenswrapper[4746]: I1211 10:05:33.603715 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a0f0228-471c-45fd-9197-241b2ba3c70a-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2\" (UID: \"6a0f0228-471c-45fd-9197-241b2ba3c70a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" Dec 11 10:05:33 crc kubenswrapper[4746]: I1211 10:05:33.705766 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a0f0228-471c-45fd-9197-241b2ba3c70a-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2\" (UID: \"6a0f0228-471c-45fd-9197-241b2ba3c70a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" Dec 11 10:05:33 crc kubenswrapper[4746]: I1211 10:05:33.705914 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmfzt\" (UniqueName: \"kubernetes.io/projected/6a0f0228-471c-45fd-9197-241b2ba3c70a-kube-api-access-qmfzt\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2\" (UID: \"6a0f0228-471c-45fd-9197-241b2ba3c70a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" Dec 11 10:05:33 crc kubenswrapper[4746]: I1211 10:05:33.706018 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6a0f0228-471c-45fd-9197-241b2ba3c70a-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2\" (UID: \"6a0f0228-471c-45fd-9197-241b2ba3c70a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" Dec 11 10:05:33 crc kubenswrapper[4746]: I1211 10:05:33.706945 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a0f0228-471c-45fd-9197-241b2ba3c70a-util\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2\" (UID: \"6a0f0228-471c-45fd-9197-241b2ba3c70a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" Dec 11 10:05:33 crc kubenswrapper[4746]: I1211 10:05:33.706983 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a0f0228-471c-45fd-9197-241b2ba3c70a-bundle\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2\" (UID: \"6a0f0228-471c-45fd-9197-241b2ba3c70a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" Dec 11 10:05:33 crc kubenswrapper[4746]: I1211 10:05:33.728155 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmfzt\" (UniqueName: \"kubernetes.io/projected/6a0f0228-471c-45fd-9197-241b2ba3c70a-kube-api-access-qmfzt\") pod \"5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2\" (UID: \"6a0f0228-471c-45fd-9197-241b2ba3c70a\") " pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" Dec 11 10:05:33 crc kubenswrapper[4746]: I1211 10:05:33.785089 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" Dec 11 10:05:34 crc kubenswrapper[4746]: I1211 10:05:34.232094 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2"] Dec 11 10:05:34 crc kubenswrapper[4746]: I1211 10:05:34.441368 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-4n677" podUID="2d6e68f4-a35b-43d1-b1fb-95600add4933" containerName="console" containerID="cri-o://de013b745135eaaea2b7ece20fe353bdc01d6305907e4afb794ed63e4650d8a7" gracePeriod=15 Dec 11 10:05:34 crc kubenswrapper[4746]: I1211 10:05:34.492357 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" event={"ID":"6a0f0228-471c-45fd-9197-241b2ba3c70a","Type":"ContainerStarted","Data":"ab687e36344a90b4d94a895ac3c113061854c12c772f80ca11b0c3ff3dacf5d7"} Dec 11 10:05:35 crc kubenswrapper[4746]: I1211 10:05:35.500450 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4n677_2d6e68f4-a35b-43d1-b1fb-95600add4933/console/0.log" Dec 11 10:05:35 crc kubenswrapper[4746]: I1211 10:05:35.500920 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4n677" event={"ID":"2d6e68f4-a35b-43d1-b1fb-95600add4933","Type":"ContainerDied","Data":"de013b745135eaaea2b7ece20fe353bdc01d6305907e4afb794ed63e4650d8a7"} Dec 11 10:05:35 crc kubenswrapper[4746]: I1211 10:05:35.501021 4746 generic.go:334] "Generic (PLEG): container finished" podID="2d6e68f4-a35b-43d1-b1fb-95600add4933" containerID="de013b745135eaaea2b7ece20fe353bdc01d6305907e4afb794ed63e4650d8a7" exitCode=2 Dec 11 10:05:35 crc kubenswrapper[4746]: I1211 10:05:35.936018 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-4n677_2d6e68f4-a35b-43d1-b1fb-95600add4933/console/0.log" Dec 11 10:05:35 crc kubenswrapper[4746]: I1211 10:05:35.936138 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-4n677" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.042261 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-trusted-ca-bundle\") pod \"2d6e68f4-a35b-43d1-b1fb-95600add4933\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.042339 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-config\") pod \"2d6e68f4-a35b-43d1-b1fb-95600add4933\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.042372 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-oauth-config\") pod \"2d6e68f4-a35b-43d1-b1fb-95600add4933\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.042408 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-oauth-serving-cert\") pod \"2d6e68f4-a35b-43d1-b1fb-95600add4933\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.042451 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-service-ca\") 
pod \"2d6e68f4-a35b-43d1-b1fb-95600add4933\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.042474 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-serving-cert\") pod \"2d6e68f4-a35b-43d1-b1fb-95600add4933\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.042504 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgq54\" (UniqueName: \"kubernetes.io/projected/2d6e68f4-a35b-43d1-b1fb-95600add4933-kube-api-access-rgq54\") pod \"2d6e68f4-a35b-43d1-b1fb-95600add4933\" (UID: \"2d6e68f4-a35b-43d1-b1fb-95600add4933\") " Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.043373 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "2d6e68f4-a35b-43d1-b1fb-95600add4933" (UID: "2d6e68f4-a35b-43d1-b1fb-95600add4933"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.043464 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "2d6e68f4-a35b-43d1-b1fb-95600add4933" (UID: "2d6e68f4-a35b-43d1-b1fb-95600add4933"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.043460 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-service-ca" (OuterVolumeSpecName: "service-ca") pod "2d6e68f4-a35b-43d1-b1fb-95600add4933" (UID: "2d6e68f4-a35b-43d1-b1fb-95600add4933"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.043907 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-config" (OuterVolumeSpecName: "console-config") pod "2d6e68f4-a35b-43d1-b1fb-95600add4933" (UID: "2d6e68f4-a35b-43d1-b1fb-95600add4933"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.044007 4746 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.044029 4746 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.044058 4746 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.044077 4746 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d6e68f4-a35b-43d1-b1fb-95600add4933-service-ca\") on node \"crc\" DevicePath 
\"\"" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.048425 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "2d6e68f4-a35b-43d1-b1fb-95600add4933" (UID: "2d6e68f4-a35b-43d1-b1fb-95600add4933"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.049100 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "2d6e68f4-a35b-43d1-b1fb-95600add4933" (UID: "2d6e68f4-a35b-43d1-b1fb-95600add4933"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.049371 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d6e68f4-a35b-43d1-b1fb-95600add4933-kube-api-access-rgq54" (OuterVolumeSpecName: "kube-api-access-rgq54") pod "2d6e68f4-a35b-43d1-b1fb-95600add4933" (UID: "2d6e68f4-a35b-43d1-b1fb-95600add4933"). InnerVolumeSpecName "kube-api-access-rgq54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.145681 4746 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.145730 4746 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/2d6e68f4-a35b-43d1-b1fb-95600add4933-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.145741 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgq54\" (UniqueName: \"kubernetes.io/projected/2d6e68f4-a35b-43d1-b1fb-95600add4933-kube-api-access-rgq54\") on node \"crc\" DevicePath \"\"" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.512400 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-4n677_2d6e68f4-a35b-43d1-b1fb-95600add4933/console/0.log" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.512906 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-4n677" event={"ID":"2d6e68f4-a35b-43d1-b1fb-95600add4933","Type":"ContainerDied","Data":"aa7efdda5bafdc0e004dd0119ad8863d9e3f399fb3020ed679a0364b61dedc27"} Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.512950 4746 scope.go:117] "RemoveContainer" containerID="de013b745135eaaea2b7ece20fe353bdc01d6305907e4afb794ed63e4650d8a7" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.512988 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-4n677" Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.515716 4746 generic.go:334] "Generic (PLEG): container finished" podID="6a0f0228-471c-45fd-9197-241b2ba3c70a" containerID="1a1665aeeb1b1cdc2be48e0f1847c7e561c7705cd586781dc617fd9ecfffa4d9" exitCode=0 Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.515757 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" event={"ID":"6a0f0228-471c-45fd-9197-241b2ba3c70a","Type":"ContainerDied","Data":"1a1665aeeb1b1cdc2be48e0f1847c7e561c7705cd586781dc617fd9ecfffa4d9"} Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.565775 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-4n677"] Dec 11 10:05:36 crc kubenswrapper[4746]: I1211 10:05:36.571145 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-4n677"] Dec 11 10:05:37 crc kubenswrapper[4746]: I1211 10:05:37.640518 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d6e68f4-a35b-43d1-b1fb-95600add4933" path="/var/lib/kubelet/pods/2d6e68f4-a35b-43d1-b1fb-95600add4933/volumes" Dec 11 10:05:39 crc kubenswrapper[4746]: I1211 10:05:39.543679 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" event={"ID":"6a0f0228-471c-45fd-9197-241b2ba3c70a","Type":"ContainerStarted","Data":"7ef2586f2cd1e641a6fb078ca9a4566eeb3a8e626bb67ed7bd23f1695d1f830f"} Dec 11 10:05:40 crc kubenswrapper[4746]: I1211 10:05:40.559625 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" event={"ID":"6a0f0228-471c-45fd-9197-241b2ba3c70a","Type":"ContainerDied","Data":"7ef2586f2cd1e641a6fb078ca9a4566eeb3a8e626bb67ed7bd23f1695d1f830f"} Dec 11 
10:05:40 crc kubenswrapper[4746]: I1211 10:05:40.559644 4746 generic.go:334] "Generic (PLEG): container finished" podID="6a0f0228-471c-45fd-9197-241b2ba3c70a" containerID="7ef2586f2cd1e641a6fb078ca9a4566eeb3a8e626bb67ed7bd23f1695d1f830f" exitCode=0 Dec 11 10:05:41 crc kubenswrapper[4746]: I1211 10:05:41.568861 4746 generic.go:334] "Generic (PLEG): container finished" podID="6a0f0228-471c-45fd-9197-241b2ba3c70a" containerID="2455becbbb008cbd6e994c673b951b6ec14b6a004ea4f750b814ad5e646a80aa" exitCode=0 Dec 11 10:05:41 crc kubenswrapper[4746]: I1211 10:05:41.568917 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" event={"ID":"6a0f0228-471c-45fd-9197-241b2ba3c70a","Type":"ContainerDied","Data":"2455becbbb008cbd6e994c673b951b6ec14b6a004ea4f750b814ad5e646a80aa"} Dec 11 10:05:42 crc kubenswrapper[4746]: I1211 10:05:42.803538 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" Dec 11 10:05:42 crc kubenswrapper[4746]: I1211 10:05:42.936677 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a0f0228-471c-45fd-9197-241b2ba3c70a-bundle\") pod \"6a0f0228-471c-45fd-9197-241b2ba3c70a\" (UID: \"6a0f0228-471c-45fd-9197-241b2ba3c70a\") " Dec 11 10:05:42 crc kubenswrapper[4746]: I1211 10:05:42.936749 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmfzt\" (UniqueName: \"kubernetes.io/projected/6a0f0228-471c-45fd-9197-241b2ba3c70a-kube-api-access-qmfzt\") pod \"6a0f0228-471c-45fd-9197-241b2ba3c70a\" (UID: \"6a0f0228-471c-45fd-9197-241b2ba3c70a\") " Dec 11 10:05:42 crc kubenswrapper[4746]: I1211 10:05:42.936777 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/6a0f0228-471c-45fd-9197-241b2ba3c70a-util\") pod \"6a0f0228-471c-45fd-9197-241b2ba3c70a\" (UID: \"6a0f0228-471c-45fd-9197-241b2ba3c70a\") " Dec 11 10:05:42 crc kubenswrapper[4746]: I1211 10:05:42.937731 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a0f0228-471c-45fd-9197-241b2ba3c70a-bundle" (OuterVolumeSpecName: "bundle") pod "6a0f0228-471c-45fd-9197-241b2ba3c70a" (UID: "6a0f0228-471c-45fd-9197-241b2ba3c70a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:05:42 crc kubenswrapper[4746]: I1211 10:05:42.944211 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a0f0228-471c-45fd-9197-241b2ba3c70a-kube-api-access-qmfzt" (OuterVolumeSpecName: "kube-api-access-qmfzt") pod "6a0f0228-471c-45fd-9197-241b2ba3c70a" (UID: "6a0f0228-471c-45fd-9197-241b2ba3c70a"). InnerVolumeSpecName "kube-api-access-qmfzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:05:42 crc kubenswrapper[4746]: I1211 10:05:42.946778 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a0f0228-471c-45fd-9197-241b2ba3c70a-util" (OuterVolumeSpecName: "util") pod "6a0f0228-471c-45fd-9197-241b2ba3c70a" (UID: "6a0f0228-471c-45fd-9197-241b2ba3c70a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:05:43 crc kubenswrapper[4746]: I1211 10:05:43.038279 4746 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6a0f0228-471c-45fd-9197-241b2ba3c70a-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:05:43 crc kubenswrapper[4746]: I1211 10:05:43.038346 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmfzt\" (UniqueName: \"kubernetes.io/projected/6a0f0228-471c-45fd-9197-241b2ba3c70a-kube-api-access-qmfzt\") on node \"crc\" DevicePath \"\"" Dec 11 10:05:43 crc kubenswrapper[4746]: I1211 10:05:43.038373 4746 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6a0f0228-471c-45fd-9197-241b2ba3c70a-util\") on node \"crc\" DevicePath \"\"" Dec 11 10:05:43 crc kubenswrapper[4746]: I1211 10:05:43.582602 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" event={"ID":"6a0f0228-471c-45fd-9197-241b2ba3c70a","Type":"ContainerDied","Data":"ab687e36344a90b4d94a895ac3c113061854c12c772f80ca11b0c3ff3dacf5d7"} Dec 11 10:05:43 crc kubenswrapper[4746]: I1211 10:05:43.582645 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab687e36344a90b4d94a895ac3c113061854c12c772f80ca11b0c3ff3dacf5d7" Dec 11 10:05:43 crc kubenswrapper[4746]: I1211 10:05:43.582707 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.645092 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg"] Dec 11 10:05:57 crc kubenswrapper[4746]: E1211 10:05:57.645789 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a0f0228-471c-45fd-9197-241b2ba3c70a" containerName="util" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.645801 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a0f0228-471c-45fd-9197-241b2ba3c70a" containerName="util" Dec 11 10:05:57 crc kubenswrapper[4746]: E1211 10:05:57.645814 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a0f0228-471c-45fd-9197-241b2ba3c70a" containerName="pull" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.645820 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a0f0228-471c-45fd-9197-241b2ba3c70a" containerName="pull" Dec 11 10:05:57 crc kubenswrapper[4746]: E1211 10:05:57.645830 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a0f0228-471c-45fd-9197-241b2ba3c70a" containerName="extract" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.645838 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a0f0228-471c-45fd-9197-241b2ba3c70a" containerName="extract" Dec 11 10:05:57 crc kubenswrapper[4746]: E1211 10:05:57.645847 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d6e68f4-a35b-43d1-b1fb-95600add4933" containerName="console" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.645853 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d6e68f4-a35b-43d1-b1fb-95600add4933" containerName="console" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.645942 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d6e68f4-a35b-43d1-b1fb-95600add4933" 
containerName="console" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.645960 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a0f0228-471c-45fd-9197-241b2ba3c70a" containerName="extract" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.646341 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.649068 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.649416 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-qscbb" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.649764 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.650000 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.651777 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.670955 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg"] Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.736935 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5803af05-a3ac-403a-88f6-4b7fb21678d0-webhook-cert\") pod \"metallb-operator-controller-manager-675c7b7dd8-6mxrg\" (UID: \"5803af05-a3ac-403a-88f6-4b7fb21678d0\") " pod="metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg" Dec 11 
10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.737026 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvdkn\" (UniqueName: \"kubernetes.io/projected/5803af05-a3ac-403a-88f6-4b7fb21678d0-kube-api-access-dvdkn\") pod \"metallb-operator-controller-manager-675c7b7dd8-6mxrg\" (UID: \"5803af05-a3ac-403a-88f6-4b7fb21678d0\") " pod="metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.737090 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5803af05-a3ac-403a-88f6-4b7fb21678d0-apiservice-cert\") pod \"metallb-operator-controller-manager-675c7b7dd8-6mxrg\" (UID: \"5803af05-a3ac-403a-88f6-4b7fb21678d0\") " pod="metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.838215 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5803af05-a3ac-403a-88f6-4b7fb21678d0-apiservice-cert\") pod \"metallb-operator-controller-manager-675c7b7dd8-6mxrg\" (UID: \"5803af05-a3ac-403a-88f6-4b7fb21678d0\") " pod="metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.838317 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5803af05-a3ac-403a-88f6-4b7fb21678d0-webhook-cert\") pod \"metallb-operator-controller-manager-675c7b7dd8-6mxrg\" (UID: \"5803af05-a3ac-403a-88f6-4b7fb21678d0\") " pod="metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.838411 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvdkn\" (UniqueName: 
\"kubernetes.io/projected/5803af05-a3ac-403a-88f6-4b7fb21678d0-kube-api-access-dvdkn\") pod \"metallb-operator-controller-manager-675c7b7dd8-6mxrg\" (UID: \"5803af05-a3ac-403a-88f6-4b7fb21678d0\") " pod="metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.844620 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5803af05-a3ac-403a-88f6-4b7fb21678d0-apiservice-cert\") pod \"metallb-operator-controller-manager-675c7b7dd8-6mxrg\" (UID: \"5803af05-a3ac-403a-88f6-4b7fb21678d0\") " pod="metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.844862 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5803af05-a3ac-403a-88f6-4b7fb21678d0-webhook-cert\") pod \"metallb-operator-controller-manager-675c7b7dd8-6mxrg\" (UID: \"5803af05-a3ac-403a-88f6-4b7fb21678d0\") " pod="metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.855367 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvdkn\" (UniqueName: \"kubernetes.io/projected/5803af05-a3ac-403a-88f6-4b7fb21678d0-kube-api-access-dvdkn\") pod \"metallb-operator-controller-manager-675c7b7dd8-6mxrg\" (UID: \"5803af05-a3ac-403a-88f6-4b7fb21678d0\") " pod="metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.960577 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.967763 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm"] Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.968613 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.970705 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.970919 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.971831 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-2m8wf" Dec 11 10:05:57 crc kubenswrapper[4746]: I1211 10:05:57.982827 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm"] Dec 11 10:05:58 crc kubenswrapper[4746]: I1211 10:05:58.143741 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b01faa1-3b2c-448d-8285-217c6dbacc16-webhook-cert\") pod \"metallb-operator-webhook-server-797f9db975-fpbhm\" (UID: \"5b01faa1-3b2c-448d-8285-217c6dbacc16\") " pod="metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm" Dec 11 10:05:58 crc kubenswrapper[4746]: I1211 10:05:58.144243 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5b01faa1-3b2c-448d-8285-217c6dbacc16-apiservice-cert\") pod 
\"metallb-operator-webhook-server-797f9db975-fpbhm\" (UID: \"5b01faa1-3b2c-448d-8285-217c6dbacc16\") " pod="metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm" Dec 11 10:05:58 crc kubenswrapper[4746]: I1211 10:05:58.144359 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hq6j\" (UniqueName: \"kubernetes.io/projected/5b01faa1-3b2c-448d-8285-217c6dbacc16-kube-api-access-5hq6j\") pod \"metallb-operator-webhook-server-797f9db975-fpbhm\" (UID: \"5b01faa1-3b2c-448d-8285-217c6dbacc16\") " pod="metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm" Dec 11 10:05:58 crc kubenswrapper[4746]: I1211 10:05:58.246102 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b01faa1-3b2c-448d-8285-217c6dbacc16-webhook-cert\") pod \"metallb-operator-webhook-server-797f9db975-fpbhm\" (UID: \"5b01faa1-3b2c-448d-8285-217c6dbacc16\") " pod="metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm" Dec 11 10:05:58 crc kubenswrapper[4746]: I1211 10:05:58.246170 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5b01faa1-3b2c-448d-8285-217c6dbacc16-apiservice-cert\") pod \"metallb-operator-webhook-server-797f9db975-fpbhm\" (UID: \"5b01faa1-3b2c-448d-8285-217c6dbacc16\") " pod="metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm" Dec 11 10:05:58 crc kubenswrapper[4746]: I1211 10:05:58.246253 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hq6j\" (UniqueName: \"kubernetes.io/projected/5b01faa1-3b2c-448d-8285-217c6dbacc16-kube-api-access-5hq6j\") pod \"metallb-operator-webhook-server-797f9db975-fpbhm\" (UID: \"5b01faa1-3b2c-448d-8285-217c6dbacc16\") " pod="metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm" Dec 11 10:05:58 crc kubenswrapper[4746]: I1211 
10:05:58.255949 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5b01faa1-3b2c-448d-8285-217c6dbacc16-webhook-cert\") pod \"metallb-operator-webhook-server-797f9db975-fpbhm\" (UID: \"5b01faa1-3b2c-448d-8285-217c6dbacc16\") " pod="metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm" Dec 11 10:05:58 crc kubenswrapper[4746]: I1211 10:05:58.269725 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5b01faa1-3b2c-448d-8285-217c6dbacc16-apiservice-cert\") pod \"metallb-operator-webhook-server-797f9db975-fpbhm\" (UID: \"5b01faa1-3b2c-448d-8285-217c6dbacc16\") " pod="metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm" Dec 11 10:05:58 crc kubenswrapper[4746]: I1211 10:05:58.277524 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hq6j\" (UniqueName: \"kubernetes.io/projected/5b01faa1-3b2c-448d-8285-217c6dbacc16-kube-api-access-5hq6j\") pod \"metallb-operator-webhook-server-797f9db975-fpbhm\" (UID: \"5b01faa1-3b2c-448d-8285-217c6dbacc16\") " pod="metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm" Dec 11 10:05:58 crc kubenswrapper[4746]: I1211 10:05:58.358168 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm" Dec 11 10:05:58 crc kubenswrapper[4746]: I1211 10:05:58.527188 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg"] Dec 11 10:05:58 crc kubenswrapper[4746]: I1211 10:05:58.586612 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm"] Dec 11 10:05:58 crc kubenswrapper[4746]: W1211 10:05:58.590663 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b01faa1_3b2c_448d_8285_217c6dbacc16.slice/crio-9c0818e902ca10c777cfec8e90d897d4a90170a8e1b77c2a0d4623c1e3f66e85 WatchSource:0}: Error finding container 9c0818e902ca10c777cfec8e90d897d4a90170a8e1b77c2a0d4623c1e3f66e85: Status 404 returned error can't find the container with id 9c0818e902ca10c777cfec8e90d897d4a90170a8e1b77c2a0d4623c1e3f66e85 Dec 11 10:05:58 crc kubenswrapper[4746]: I1211 10:05:58.670126 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg" event={"ID":"5803af05-a3ac-403a-88f6-4b7fb21678d0","Type":"ContainerStarted","Data":"573abdb357e51e37cd883747168c0ec39d6d30a3e1be8cdc6d960c855f0afce4"} Dec 11 10:05:58 crc kubenswrapper[4746]: I1211 10:05:58.671414 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm" event={"ID":"5b01faa1-3b2c-448d-8285-217c6dbacc16","Type":"ContainerStarted","Data":"9c0818e902ca10c777cfec8e90d897d4a90170a8e1b77c2a0d4623c1e3f66e85"} Dec 11 10:06:02 crc kubenswrapper[4746]: I1211 10:06:02.702492 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg" 
event={"ID":"5803af05-a3ac-403a-88f6-4b7fb21678d0","Type":"ContainerStarted","Data":"dd35a9aef79c74a96877360f96a54fc7887ddfbc068482b0dbeb9fa1da6b8520"} Dec 11 10:06:02 crc kubenswrapper[4746]: I1211 10:06:02.703246 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg" Dec 11 10:06:02 crc kubenswrapper[4746]: I1211 10:06:02.728590 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg" podStartSLOduration=2.136517007 podStartE2EDuration="5.728569881s" podCreationTimestamp="2025-12-11 10:05:57 +0000 UTC" firstStartedPulling="2025-12-11 10:05:58.540194725 +0000 UTC m=+731.400058038" lastFinishedPulling="2025-12-11 10:06:02.132247609 +0000 UTC m=+734.992110912" observedRunningTime="2025-12-11 10:06:02.726698281 +0000 UTC m=+735.586561584" watchObservedRunningTime="2025-12-11 10:06:02.728569881 +0000 UTC m=+735.588433194" Dec 11 10:06:04 crc kubenswrapper[4746]: I1211 10:06:04.713880 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm" event={"ID":"5b01faa1-3b2c-448d-8285-217c6dbacc16","Type":"ContainerStarted","Data":"6b8c204dd365deff6c10ccb39dfbc89542c8952faf094921a451e5db9d131f80"} Dec 11 10:06:04 crc kubenswrapper[4746]: I1211 10:06:04.714220 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm" Dec 11 10:06:04 crc kubenswrapper[4746]: I1211 10:06:04.739380 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm" podStartSLOduration=2.549205113 podStartE2EDuration="7.739366035s" podCreationTimestamp="2025-12-11 10:05:57 +0000 UTC" firstStartedPulling="2025-12-11 10:05:58.599072946 +0000 UTC m=+731.458936259" lastFinishedPulling="2025-12-11 
10:06:03.789233868 +0000 UTC m=+736.649097181" observedRunningTime="2025-12-11 10:06:04.732318384 +0000 UTC m=+737.592181697" watchObservedRunningTime="2025-12-11 10:06:04.739366035 +0000 UTC m=+737.599229348" Dec 11 10:06:18 crc kubenswrapper[4746]: I1211 10:06:18.366846 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-797f9db975-fpbhm" Dec 11 10:06:37 crc kubenswrapper[4746]: I1211 10:06:37.965775 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-675c7b7dd8-6mxrg" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.725652 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-8bdwk"] Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.728355 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.730639 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.731267 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.731467 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vx8dn" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.747626 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-bs5bz"] Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.748389 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bs5bz" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.750590 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.777572 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-bs5bz"] Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.856767 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-rqt4v"] Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.857872 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-rqt4v" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.860660 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.860689 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.860738 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.863486 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-rng76" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.877355 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5bddd4b946-pfns9"] Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.878218 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5bddd4b946-pfns9" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.879858 4746 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.900706 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-pfns9"] Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.901136 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5dkn\" (UniqueName: \"kubernetes.io/projected/a7040058-21f6-4b31-8369-5c8c471f9cf6-kube-api-access-h5dkn\") pod \"frr-k8s-webhook-server-7784b6fcf-bs5bz\" (UID: \"a7040058-21f6-4b31-8369-5c8c471f9cf6\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bs5bz" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.901204 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-metrics-certs\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.901235 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jstnl\" (UniqueName: \"kubernetes.io/projected/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-kube-api-access-jstnl\") pod \"speaker-rqt4v\" (UID: \"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8\") " pod="metallb-system/speaker-rqt4v" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.901300 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-metrics-certs\") pod \"speaker-rqt4v\" (UID: \"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8\") " 
pod="metallb-system/speaker-rqt4v" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.901385 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-reloader\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.901430 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/912d8133-522c-4e88-a253-bf9be07b4d13-metrics-certs\") pod \"controller-5bddd4b946-pfns9\" (UID: \"912d8133-522c-4e88-a253-bf9be07b4d13\") " pod="metallb-system/controller-5bddd4b946-pfns9" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.901455 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk9pm\" (UniqueName: \"kubernetes.io/projected/912d8133-522c-4e88-a253-bf9be07b4d13-kube-api-access-sk9pm\") pod \"controller-5bddd4b946-pfns9\" (UID: \"912d8133-522c-4e88-a253-bf9be07b4d13\") " pod="metallb-system/controller-5bddd4b946-pfns9" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.901474 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-metallb-excludel2\") pod \"speaker-rqt4v\" (UID: \"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8\") " pod="metallb-system/speaker-rqt4v" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.901493 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-frr-startup\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " 
pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.901515 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-memberlist\") pod \"speaker-rqt4v\" (UID: \"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8\") " pod="metallb-system/speaker-rqt4v" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.901592 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/912d8133-522c-4e88-a253-bf9be07b4d13-cert\") pod \"controller-5bddd4b946-pfns9\" (UID: \"912d8133-522c-4e88-a253-bf9be07b4d13\") " pod="metallb-system/controller-5bddd4b946-pfns9" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.901641 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-frr-conf\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.901693 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q56jj\" (UniqueName: \"kubernetes.io/projected/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-kube-api-access-q56jj\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.901751 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-frr-sockets\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 
10:06:38.901778 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7040058-21f6-4b31-8369-5c8c471f9cf6-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-bs5bz\" (UID: \"a7040058-21f6-4b31-8369-5c8c471f9cf6\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bs5bz" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.901865 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-metrics\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:38 crc kubenswrapper[4746]: I1211 10:06:38.938883 4746 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.002501 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/912d8133-522c-4e88-a253-bf9be07b4d13-cert\") pod \"controller-5bddd4b946-pfns9\" (UID: \"912d8133-522c-4e88-a253-bf9be07b4d13\") " pod="metallb-system/controller-5bddd4b946-pfns9" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.002560 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-frr-conf\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.002584 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q56jj\" (UniqueName: \"kubernetes.io/projected/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-kube-api-access-q56jj\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " 
pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.002619 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-frr-sockets\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.002647 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7040058-21f6-4b31-8369-5c8c471f9cf6-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-bs5bz\" (UID: \"a7040058-21f6-4b31-8369-5c8c471f9cf6\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bs5bz" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.002681 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-metrics\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.002711 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5dkn\" (UniqueName: \"kubernetes.io/projected/a7040058-21f6-4b31-8369-5c8c471f9cf6-kube-api-access-h5dkn\") pod \"frr-k8s-webhook-server-7784b6fcf-bs5bz\" (UID: \"a7040058-21f6-4b31-8369-5c8c471f9cf6\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bs5bz" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.002735 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-metrics-certs\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.002759 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jstnl\" (UniqueName: \"kubernetes.io/projected/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-kube-api-access-jstnl\") pod \"speaker-rqt4v\" (UID: \"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8\") " pod="metallb-system/speaker-rqt4v" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.002784 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-metrics-certs\") pod \"speaker-rqt4v\" (UID: \"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8\") " pod="metallb-system/speaker-rqt4v" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.002824 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-reloader\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.002851 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/912d8133-522c-4e88-a253-bf9be07b4d13-metrics-certs\") pod \"controller-5bddd4b946-pfns9\" (UID: \"912d8133-522c-4e88-a253-bf9be07b4d13\") " pod="metallb-system/controller-5bddd4b946-pfns9" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.002868 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk9pm\" (UniqueName: \"kubernetes.io/projected/912d8133-522c-4e88-a253-bf9be07b4d13-kube-api-access-sk9pm\") pod \"controller-5bddd4b946-pfns9\" (UID: \"912d8133-522c-4e88-a253-bf9be07b4d13\") " pod="metallb-system/controller-5bddd4b946-pfns9" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.002883 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-metallb-excludel2\") pod \"speaker-rqt4v\" (UID: \"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8\") " pod="metallb-system/speaker-rqt4v" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.002899 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-frr-startup\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.002914 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-memberlist\") pod \"speaker-rqt4v\" (UID: \"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8\") " pod="metallb-system/speaker-rqt4v" Dec 11 10:06:39 crc kubenswrapper[4746]: E1211 10:06:39.003057 4746 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 11 10:06:39 crc kubenswrapper[4746]: E1211 10:06:39.003109 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-memberlist podName:cc662df8-2d5c-41e8-919a-dc8d1f4d20d8 nodeName:}" failed. No retries permitted until 2025-12-11 10:06:39.503090907 +0000 UTC m=+772.362954220 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-memberlist") pod "speaker-rqt4v" (UID: "cc662df8-2d5c-41e8-919a-dc8d1f4d20d8") : secret "metallb-memberlist" not found Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.003561 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-metrics\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.003561 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-frr-conf\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.003603 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-reloader\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.003618 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-frr-sockets\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:39 crc kubenswrapper[4746]: E1211 10:06:39.003809 4746 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Dec 11 10:06:39 crc kubenswrapper[4746]: E1211 10:06:39.003958 4746 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-metrics-certs podName:cc662df8-2d5c-41e8-919a-dc8d1f4d20d8 nodeName:}" failed. No retries permitted until 2025-12-11 10:06:39.50394294 +0000 UTC m=+772.363806253 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-metrics-certs") pod "speaker-rqt4v" (UID: "cc662df8-2d5c-41e8-919a-dc8d1f4d20d8") : secret "speaker-certs-secret" not found Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.004263 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-metallb-excludel2\") pod \"speaker-rqt4v\" (UID: \"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8\") " pod="metallb-system/speaker-rqt4v" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.004942 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-frr-startup\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.011622 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/912d8133-522c-4e88-a253-bf9be07b4d13-cert\") pod \"controller-5bddd4b946-pfns9\" (UID: \"912d8133-522c-4e88-a253-bf9be07b4d13\") " pod="metallb-system/controller-5bddd4b946-pfns9" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.011672 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/912d8133-522c-4e88-a253-bf9be07b4d13-metrics-certs\") pod \"controller-5bddd4b946-pfns9\" (UID: \"912d8133-522c-4e88-a253-bf9be07b4d13\") " pod="metallb-system/controller-5bddd4b946-pfns9" Dec 11 10:06:39 crc 
kubenswrapper[4746]: I1211 10:06:39.011950 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a7040058-21f6-4b31-8369-5c8c471f9cf6-cert\") pod \"frr-k8s-webhook-server-7784b6fcf-bs5bz\" (UID: \"a7040058-21f6-4b31-8369-5c8c471f9cf6\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bs5bz" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.025385 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-metrics-certs\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.028065 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jstnl\" (UniqueName: \"kubernetes.io/projected/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-kube-api-access-jstnl\") pod \"speaker-rqt4v\" (UID: \"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8\") " pod="metallb-system/speaker-rqt4v" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.028646 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5dkn\" (UniqueName: \"kubernetes.io/projected/a7040058-21f6-4b31-8369-5c8c471f9cf6-kube-api-access-h5dkn\") pod \"frr-k8s-webhook-server-7784b6fcf-bs5bz\" (UID: \"a7040058-21f6-4b31-8369-5c8c471f9cf6\") " pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bs5bz" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.036866 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk9pm\" (UniqueName: \"kubernetes.io/projected/912d8133-522c-4e88-a253-bf9be07b4d13-kube-api-access-sk9pm\") pod \"controller-5bddd4b946-pfns9\" (UID: \"912d8133-522c-4e88-a253-bf9be07b4d13\") " pod="metallb-system/controller-5bddd4b946-pfns9" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.064335 4746 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bs5bz" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.098610 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q56jj\" (UniqueName: \"kubernetes.io/projected/f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96-kube-api-access-q56jj\") pod \"frr-k8s-8bdwk\" (UID: \"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96\") " pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.191486 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5bddd4b946-pfns9" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.353256 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.501621 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5bddd4b946-pfns9"] Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.510788 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-metrics-certs\") pod \"speaker-rqt4v\" (UID: \"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8\") " pod="metallb-system/speaker-rqt4v" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.510997 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-memberlist\") pod \"speaker-rqt4v\" (UID: \"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8\") " pod="metallb-system/speaker-rqt4v" Dec 11 10:06:39 crc kubenswrapper[4746]: E1211 10:06:39.511120 4746 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 11 10:06:39 crc kubenswrapper[4746]: E1211 10:06:39.511184 4746 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-memberlist podName:cc662df8-2d5c-41e8-919a-dc8d1f4d20d8 nodeName:}" failed. No retries permitted until 2025-12-11 10:06:40.511166133 +0000 UTC m=+773.371029446 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-memberlist") pod "speaker-rqt4v" (UID: "cc662df8-2d5c-41e8-919a-dc8d1f4d20d8") : secret "metallb-memberlist" not found Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.517570 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-metrics-certs\") pod \"speaker-rqt4v\" (UID: \"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8\") " pod="metallb-system/speaker-rqt4v" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.629175 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7784b6fcf-bs5bz"] Dec 11 10:06:39 crc kubenswrapper[4746]: W1211 10:06:39.636744 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7040058_21f6_4b31_8369_5c8c471f9cf6.slice/crio-61de11e483a5170fb4e838bf6e90a52cc680c23cb96e33df97da5cddd024baa3 WatchSource:0}: Error finding container 61de11e483a5170fb4e838bf6e90a52cc680c23cb96e33df97da5cddd024baa3: Status 404 returned error can't find the container with id 61de11e483a5170fb4e838bf6e90a52cc680c23cb96e33df97da5cddd024baa3 Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.924333 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-pfns9" event={"ID":"912d8133-522c-4e88-a253-bf9be07b4d13","Type":"ContainerStarted","Data":"e6d13b0c2a6a7c01ccb9a8a4d3e4db8c276facc4fec873a3e71960040c5eef30"} Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.925391 4746 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5bddd4b946-pfns9" Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.925552 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-pfns9" event={"ID":"912d8133-522c-4e88-a253-bf9be07b4d13","Type":"ContainerStarted","Data":"c9ce9f739021363a8617c9e6c1dc6ff792725dae50af6385be2a8e455282a11e"} Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.925676 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5bddd4b946-pfns9" event={"ID":"912d8133-522c-4e88-a253-bf9be07b4d13","Type":"ContainerStarted","Data":"f03a611c6b6c4faaeaa448811238a8546a0224f90b16c510683365c17c4d60f1"} Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.925862 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bs5bz" event={"ID":"a7040058-21f6-4b31-8369-5c8c471f9cf6","Type":"ContainerStarted","Data":"61de11e483a5170fb4e838bf6e90a52cc680c23cb96e33df97da5cddd024baa3"} Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.926087 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8bdwk" event={"ID":"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96","Type":"ContainerStarted","Data":"4e3a8b070dbf8feca2f2e896d900ebd0c2b392792590f4264b3c8e1a70532949"} Dec 11 10:06:39 crc kubenswrapper[4746]: I1211 10:06:39.942608 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5bddd4b946-pfns9" podStartSLOduration=1.942590737 podStartE2EDuration="1.942590737s" podCreationTimestamp="2025-12-11 10:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:06:39.942122455 +0000 UTC m=+772.801985798" watchObservedRunningTime="2025-12-11 10:06:39.942590737 +0000 UTC m=+772.802454070" Dec 11 10:06:40 crc kubenswrapper[4746]: I1211 
10:06:40.524464 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-memberlist\") pod \"speaker-rqt4v\" (UID: \"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8\") " pod="metallb-system/speaker-rqt4v" Dec 11 10:06:40 crc kubenswrapper[4746]: I1211 10:06:40.529309 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/cc662df8-2d5c-41e8-919a-dc8d1f4d20d8-memberlist\") pod \"speaker-rqt4v\" (UID: \"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8\") " pod="metallb-system/speaker-rqt4v" Dec 11 10:06:40 crc kubenswrapper[4746]: I1211 10:06:40.670834 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-rqt4v" Dec 11 10:06:40 crc kubenswrapper[4746]: I1211 10:06:40.934265 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rqt4v" event={"ID":"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8","Type":"ContainerStarted","Data":"e171a69f5024da3ceba1b9788dc5e2c422b5aa01e2917cc57ed773532c1af39e"} Dec 11 10:06:41 crc kubenswrapper[4746]: I1211 10:06:41.946082 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rqt4v" event={"ID":"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8","Type":"ContainerStarted","Data":"a2b7e66c09b23097b3162beaf55bff765b9ce377458fb30f17e54b4e0592847f"} Dec 11 10:06:41 crc kubenswrapper[4746]: I1211 10:06:41.946695 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-rqt4v" event={"ID":"cc662df8-2d5c-41e8-919a-dc8d1f4d20d8","Type":"ContainerStarted","Data":"ff2c15124467686a3a8612178e1e50b330eaa4e81f5a94f8a01d1431ffbe180d"} Dec 11 10:06:41 crc kubenswrapper[4746]: I1211 10:06:41.971350 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-rqt4v" podStartSLOduration=3.971332836 podStartE2EDuration="3.971332836s" 
podCreationTimestamp="2025-12-11 10:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:06:41.967953995 +0000 UTC m=+774.827817308" watchObservedRunningTime="2025-12-11 10:06:41.971332836 +0000 UTC m=+774.831196149" Dec 11 10:06:42 crc kubenswrapper[4746]: I1211 10:06:42.954523 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-rqt4v" Dec 11 10:06:47 crc kubenswrapper[4746]: E1211 10:06:47.491437 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7e4da6b_6da4_4ac3_87b4_ef4042a6fb96.slice/crio-d784cee1b43c7e7647aa95cc2d7de7af7832acf3dd0d3de2643a223401ac5e2b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7e4da6b_6da4_4ac3_87b4_ef4042a6fb96.slice/crio-conmon-d784cee1b43c7e7647aa95cc2d7de7af7832acf3dd0d3de2643a223401ac5e2b.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:06:47 crc kubenswrapper[4746]: I1211 10:06:47.997296 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bs5bz" event={"ID":"a7040058-21f6-4b31-8369-5c8c471f9cf6","Type":"ContainerStarted","Data":"833d737e14d5d08985e31811a58c06e5d2ea0a9f809276471df196b10b465c5c"} Dec 11 10:06:47 crc kubenswrapper[4746]: I1211 10:06:47.997807 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bs5bz" Dec 11 10:06:47 crc kubenswrapper[4746]: I1211 10:06:47.998809 4746 generic.go:334] "Generic (PLEG): container finished" podID="f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96" containerID="d784cee1b43c7e7647aa95cc2d7de7af7832acf3dd0d3de2643a223401ac5e2b" exitCode=0 Dec 11 10:06:47 crc kubenswrapper[4746]: I1211 10:06:47.998853 4746 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8bdwk" event={"ID":"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96","Type":"ContainerDied","Data":"d784cee1b43c7e7647aa95cc2d7de7af7832acf3dd0d3de2643a223401ac5e2b"} Dec 11 10:06:48 crc kubenswrapper[4746]: I1211 10:06:48.021941 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bs5bz" podStartSLOduration=2.4423666490000002 podStartE2EDuration="10.021921139s" podCreationTimestamp="2025-12-11 10:06:38 +0000 UTC" firstStartedPulling="2025-12-11 10:06:39.641569839 +0000 UTC m=+772.501433152" lastFinishedPulling="2025-12-11 10:06:47.221124329 +0000 UTC m=+780.080987642" observedRunningTime="2025-12-11 10:06:48.014952401 +0000 UTC m=+780.874815744" watchObservedRunningTime="2025-12-11 10:06:48.021921139 +0000 UTC m=+780.881784462" Dec 11 10:06:49 crc kubenswrapper[4746]: I1211 10:06:49.008472 4746 generic.go:334] "Generic (PLEG): container finished" podID="f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96" containerID="1f74fab8cb209a7ae5d042003a6c00e787b4db7829557e48c9f3733678aab391" exitCode=0 Dec 11 10:06:49 crc kubenswrapper[4746]: I1211 10:06:49.008590 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8bdwk" event={"ID":"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96","Type":"ContainerDied","Data":"1f74fab8cb209a7ae5d042003a6c00e787b4db7829557e48c9f3733678aab391"} Dec 11 10:06:49 crc kubenswrapper[4746]: I1211 10:06:49.195943 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-5bddd4b946-pfns9" Dec 11 10:06:50 crc kubenswrapper[4746]: I1211 10:06:50.017349 4746 generic.go:334] "Generic (PLEG): container finished" podID="f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96" containerID="23fe2e993189430e71506fdf72b7df2771c849d65b4695e13498854a5f989b63" exitCode=0 Dec 11 10:06:50 crc kubenswrapper[4746]: I1211 10:06:50.017393 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-8bdwk" event={"ID":"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96","Type":"ContainerDied","Data":"23fe2e993189430e71506fdf72b7df2771c849d65b4695e13498854a5f989b63"} Dec 11 10:06:50 crc kubenswrapper[4746]: I1211 10:06:50.673671 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-rqt4v" Dec 11 10:06:51 crc kubenswrapper[4746]: I1211 10:06:51.030917 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8bdwk" event={"ID":"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96","Type":"ContainerStarted","Data":"a583602159e7f60de3c5c3c21ee024fe21422ddd088d3585e3c787779f2667ea"} Dec 11 10:06:51 crc kubenswrapper[4746]: I1211 10:06:51.030990 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8bdwk" event={"ID":"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96","Type":"ContainerStarted","Data":"12d26fed9d43871e39655c507a097ef59f43688175281bfa2904b42a57bd21a9"} Dec 11 10:06:51 crc kubenswrapper[4746]: I1211 10:06:51.031008 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8bdwk" event={"ID":"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96","Type":"ContainerStarted","Data":"ce27c3147cbc0b8597578ea10541a149ecec4f3963c3d7c687663fd498e85e9f"} Dec 11 10:06:51 crc kubenswrapper[4746]: I1211 10:06:51.031021 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8bdwk" event={"ID":"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96","Type":"ContainerStarted","Data":"7c17f26f52923b99c2d320dd8961492253c4c26ee916411a62221cf1d1dac5e8"} Dec 11 10:06:51 crc kubenswrapper[4746]: I1211 10:06:51.031033 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-8bdwk" event={"ID":"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96","Type":"ContainerStarted","Data":"c2681ebcc6e800e6e0189e6df226cc3559f001c22c14a6eee2c88a4367b54901"} Dec 11 10:06:52 crc kubenswrapper[4746]: I1211 10:06:52.044770 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-8bdwk" event={"ID":"f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96","Type":"ContainerStarted","Data":"cef67b3c5c92da9018628f47d3f0e3cb06f57b9a552c2074f5f28d7d3b1cc03b"} Dec 11 10:06:52 crc kubenswrapper[4746]: I1211 10:06:52.046179 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:52 crc kubenswrapper[4746]: I1211 10:06:52.081010 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-8bdwk" podStartSLOduration=6.325016841 podStartE2EDuration="14.080978161s" podCreationTimestamp="2025-12-11 10:06:38 +0000 UTC" firstStartedPulling="2025-12-11 10:06:39.482234111 +0000 UTC m=+772.342097424" lastFinishedPulling="2025-12-11 10:06:47.238195431 +0000 UTC m=+780.098058744" observedRunningTime="2025-12-11 10:06:52.07647533 +0000 UTC m=+784.936338673" watchObservedRunningTime="2025-12-11 10:06:52.080978161 +0000 UTC m=+784.940841494" Dec 11 10:06:53 crc kubenswrapper[4746]: I1211 10:06:53.801315 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-7nn96"] Dec 11 10:06:53 crc kubenswrapper[4746]: I1211 10:06:53.802361 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-7nn96" Dec 11 10:06:53 crc kubenswrapper[4746]: I1211 10:06:53.803960 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-kt2v2" Dec 11 10:06:53 crc kubenswrapper[4746]: I1211 10:06:53.804491 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 11 10:06:53 crc kubenswrapper[4746]: I1211 10:06:53.805236 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55qdj\" (UniqueName: \"kubernetes.io/projected/712ae45d-eee2-41a7-9a2b-2bc89f5c34a2-kube-api-access-55qdj\") pod \"openstack-operator-index-7nn96\" (UID: \"712ae45d-eee2-41a7-9a2b-2bc89f5c34a2\") " pod="openstack-operators/openstack-operator-index-7nn96" Dec 11 10:06:53 crc kubenswrapper[4746]: I1211 10:06:53.805530 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 11 10:06:53 crc kubenswrapper[4746]: I1211 10:06:53.811806 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7nn96"] Dec 11 10:06:53 crc kubenswrapper[4746]: I1211 10:06:53.906561 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55qdj\" (UniqueName: \"kubernetes.io/projected/712ae45d-eee2-41a7-9a2b-2bc89f5c34a2-kube-api-access-55qdj\") pod \"openstack-operator-index-7nn96\" (UID: \"712ae45d-eee2-41a7-9a2b-2bc89f5c34a2\") " pod="openstack-operators/openstack-operator-index-7nn96" Dec 11 10:06:53 crc kubenswrapper[4746]: I1211 10:06:53.929580 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55qdj\" (UniqueName: \"kubernetes.io/projected/712ae45d-eee2-41a7-9a2b-2bc89f5c34a2-kube-api-access-55qdj\") pod \"openstack-operator-index-7nn96\" (UID: 
\"712ae45d-eee2-41a7-9a2b-2bc89f5c34a2\") " pod="openstack-operators/openstack-operator-index-7nn96" Dec 11 10:06:54 crc kubenswrapper[4746]: I1211 10:06:54.118581 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7nn96" Dec 11 10:06:54 crc kubenswrapper[4746]: I1211 10:06:54.354601 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:54 crc kubenswrapper[4746]: I1211 10:06:54.391449 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:06:54 crc kubenswrapper[4746]: I1211 10:06:54.544390 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-7nn96"] Dec 11 10:06:55 crc kubenswrapper[4746]: I1211 10:06:55.070803 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7nn96" event={"ID":"712ae45d-eee2-41a7-9a2b-2bc89f5c34a2","Type":"ContainerStarted","Data":"fc05af4f443291d36f21a8a8cb77c4e77a3b27a85612b55e70f93fa04bffb6a0"} Dec 11 10:06:57 crc kubenswrapper[4746]: I1211 10:06:57.380377 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-7nn96"] Dec 11 10:06:57 crc kubenswrapper[4746]: I1211 10:06:57.987590 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-hsbv7"] Dec 11 10:06:57 crc kubenswrapper[4746]: I1211 10:06:57.988638 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hsbv7" Dec 11 10:06:58 crc kubenswrapper[4746]: I1211 10:06:58.000186 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hsbv7"] Dec 11 10:06:58 crc kubenswrapper[4746]: I1211 10:06:58.063376 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcvl9\" (UniqueName: \"kubernetes.io/projected/f80e03b9-de97-4ae0-bbcd-edc0079f20f3-kube-api-access-kcvl9\") pod \"openstack-operator-index-hsbv7\" (UID: \"f80e03b9-de97-4ae0-bbcd-edc0079f20f3\") " pod="openstack-operators/openstack-operator-index-hsbv7" Dec 11 10:06:58 crc kubenswrapper[4746]: I1211 10:06:58.164307 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcvl9\" (UniqueName: \"kubernetes.io/projected/f80e03b9-de97-4ae0-bbcd-edc0079f20f3-kube-api-access-kcvl9\") pod \"openstack-operator-index-hsbv7\" (UID: \"f80e03b9-de97-4ae0-bbcd-edc0079f20f3\") " pod="openstack-operators/openstack-operator-index-hsbv7" Dec 11 10:06:58 crc kubenswrapper[4746]: I1211 10:06:58.185694 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcvl9\" (UniqueName: \"kubernetes.io/projected/f80e03b9-de97-4ae0-bbcd-edc0079f20f3-kube-api-access-kcvl9\") pod \"openstack-operator-index-hsbv7\" (UID: \"f80e03b9-de97-4ae0-bbcd-edc0079f20f3\") " pod="openstack-operators/openstack-operator-index-hsbv7" Dec 11 10:06:58 crc kubenswrapper[4746]: I1211 10:06:58.309831 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-hsbv7" Dec 11 10:06:58 crc kubenswrapper[4746]: I1211 10:06:58.955207 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hsbv7"] Dec 11 10:06:58 crc kubenswrapper[4746]: W1211 10:06:58.960413 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf80e03b9_de97_4ae0_bbcd_edc0079f20f3.slice/crio-5b1bb7b98a5a49c40ec2242aaf5cb0b951aece703d36bea37875fe1e92f49fc2 WatchSource:0}: Error finding container 5b1bb7b98a5a49c40ec2242aaf5cb0b951aece703d36bea37875fe1e92f49fc2: Status 404 returned error can't find the container with id 5b1bb7b98a5a49c40ec2242aaf5cb0b951aece703d36bea37875fe1e92f49fc2 Dec 11 10:06:59 crc kubenswrapper[4746]: I1211 10:06:59.069722 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7784b6fcf-bs5bz" Dec 11 10:06:59 crc kubenswrapper[4746]: I1211 10:06:59.103279 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hsbv7" event={"ID":"f80e03b9-de97-4ae0-bbcd-edc0079f20f3","Type":"ContainerStarted","Data":"5b1bb7b98a5a49c40ec2242aaf5cb0b951aece703d36bea37875fe1e92f49fc2"} Dec 11 10:06:59 crc kubenswrapper[4746]: I1211 10:06:59.105267 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7nn96" event={"ID":"712ae45d-eee2-41a7-9a2b-2bc89f5c34a2","Type":"ContainerStarted","Data":"9432343fed1ed7db61f0b1ca221ac53b90473fd673315cd36c7a38b203e6c5af"} Dec 11 10:06:59 crc kubenswrapper[4746]: I1211 10:06:59.105450 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-7nn96" podUID="712ae45d-eee2-41a7-9a2b-2bc89f5c34a2" containerName="registry-server" 
containerID="cri-o://9432343fed1ed7db61f0b1ca221ac53b90473fd673315cd36c7a38b203e6c5af" gracePeriod=2 Dec 11 10:06:59 crc kubenswrapper[4746]: I1211 10:06:59.130983 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-7nn96" podStartSLOduration=2.080870319 podStartE2EDuration="6.130959456s" podCreationTimestamp="2025-12-11 10:06:53 +0000 UTC" firstStartedPulling="2025-12-11 10:06:54.561737281 +0000 UTC m=+787.421600584" lastFinishedPulling="2025-12-11 10:06:58.611826408 +0000 UTC m=+791.471689721" observedRunningTime="2025-12-11 10:06:59.12109924 +0000 UTC m=+791.980962553" watchObservedRunningTime="2025-12-11 10:06:59.130959456 +0000 UTC m=+791.990822779" Dec 11 10:06:59 crc kubenswrapper[4746]: I1211 10:06:59.454102 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7nn96" Dec 11 10:06:59 crc kubenswrapper[4746]: I1211 10:06:59.585750 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55qdj\" (UniqueName: \"kubernetes.io/projected/712ae45d-eee2-41a7-9a2b-2bc89f5c34a2-kube-api-access-55qdj\") pod \"712ae45d-eee2-41a7-9a2b-2bc89f5c34a2\" (UID: \"712ae45d-eee2-41a7-9a2b-2bc89f5c34a2\") " Dec 11 10:06:59 crc kubenswrapper[4746]: I1211 10:06:59.590900 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/712ae45d-eee2-41a7-9a2b-2bc89f5c34a2-kube-api-access-55qdj" (OuterVolumeSpecName: "kube-api-access-55qdj") pod "712ae45d-eee2-41a7-9a2b-2bc89f5c34a2" (UID: "712ae45d-eee2-41a7-9a2b-2bc89f5c34a2"). InnerVolumeSpecName "kube-api-access-55qdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:06:59 crc kubenswrapper[4746]: I1211 10:06:59.687834 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55qdj\" (UniqueName: \"kubernetes.io/projected/712ae45d-eee2-41a7-9a2b-2bc89f5c34a2-kube-api-access-55qdj\") on node \"crc\" DevicePath \"\"" Dec 11 10:06:59 crc kubenswrapper[4746]: I1211 10:06:59.878556 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:06:59 crc kubenswrapper[4746]: I1211 10:06:59.879299 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:07:00 crc kubenswrapper[4746]: I1211 10:07:00.114160 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hsbv7" event={"ID":"f80e03b9-de97-4ae0-bbcd-edc0079f20f3","Type":"ContainerStarted","Data":"eab5d61aa62db55e1939c7ee2a9d2b7e8da88cf2850d1503e8d23c5e93fa32f7"} Dec 11 10:07:00 crc kubenswrapper[4746]: I1211 10:07:00.115927 4746 generic.go:334] "Generic (PLEG): container finished" podID="712ae45d-eee2-41a7-9a2b-2bc89f5c34a2" containerID="9432343fed1ed7db61f0b1ca221ac53b90473fd673315cd36c7a38b203e6c5af" exitCode=0 Dec 11 10:07:00 crc kubenswrapper[4746]: I1211 10:07:00.115999 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7nn96" event={"ID":"712ae45d-eee2-41a7-9a2b-2bc89f5c34a2","Type":"ContainerDied","Data":"9432343fed1ed7db61f0b1ca221ac53b90473fd673315cd36c7a38b203e6c5af"} Dec 11 
10:07:00 crc kubenswrapper[4746]: I1211 10:07:00.116027 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-7nn96" Dec 11 10:07:00 crc kubenswrapper[4746]: I1211 10:07:00.116077 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-7nn96" event={"ID":"712ae45d-eee2-41a7-9a2b-2bc89f5c34a2","Type":"ContainerDied","Data":"fc05af4f443291d36f21a8a8cb77c4e77a3b27a85612b55e70f93fa04bffb6a0"} Dec 11 10:07:00 crc kubenswrapper[4746]: I1211 10:07:00.116091 4746 scope.go:117] "RemoveContainer" containerID="9432343fed1ed7db61f0b1ca221ac53b90473fd673315cd36c7a38b203e6c5af" Dec 11 10:07:00 crc kubenswrapper[4746]: I1211 10:07:00.143525 4746 scope.go:117] "RemoveContainer" containerID="9432343fed1ed7db61f0b1ca221ac53b90473fd673315cd36c7a38b203e6c5af" Dec 11 10:07:00 crc kubenswrapper[4746]: E1211 10:07:00.144251 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9432343fed1ed7db61f0b1ca221ac53b90473fd673315cd36c7a38b203e6c5af\": container with ID starting with 9432343fed1ed7db61f0b1ca221ac53b90473fd673315cd36c7a38b203e6c5af not found: ID does not exist" containerID="9432343fed1ed7db61f0b1ca221ac53b90473fd673315cd36c7a38b203e6c5af" Dec 11 10:07:00 crc kubenswrapper[4746]: I1211 10:07:00.144292 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9432343fed1ed7db61f0b1ca221ac53b90473fd673315cd36c7a38b203e6c5af"} err="failed to get container status \"9432343fed1ed7db61f0b1ca221ac53b90473fd673315cd36c7a38b203e6c5af\": rpc error: code = NotFound desc = could not find container \"9432343fed1ed7db61f0b1ca221ac53b90473fd673315cd36c7a38b203e6c5af\": container with ID starting with 9432343fed1ed7db61f0b1ca221ac53b90473fd673315cd36c7a38b203e6c5af not found: ID does not exist" Dec 11 10:07:00 crc kubenswrapper[4746]: I1211 10:07:00.145428 
4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-hsbv7" podStartSLOduration=3.086682673 podStartE2EDuration="3.145397959s" podCreationTimestamp="2025-12-11 10:06:57 +0000 UTC" firstStartedPulling="2025-12-11 10:06:58.964870981 +0000 UTC m=+791.824734294" lastFinishedPulling="2025-12-11 10:06:59.023586267 +0000 UTC m=+791.883449580" observedRunningTime="2025-12-11 10:07:00.134520995 +0000 UTC m=+792.994384328" watchObservedRunningTime="2025-12-11 10:07:00.145397959 +0000 UTC m=+793.005261292" Dec 11 10:07:00 crc kubenswrapper[4746]: I1211 10:07:00.160796 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-7nn96"] Dec 11 10:07:00 crc kubenswrapper[4746]: I1211 10:07:00.165541 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-7nn96"] Dec 11 10:07:01 crc kubenswrapper[4746]: I1211 10:07:01.639868 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="712ae45d-eee2-41a7-9a2b-2bc89f5c34a2" path="/var/lib/kubelet/pods/712ae45d-eee2-41a7-9a2b-2bc89f5c34a2/volumes" Dec 11 10:07:08 crc kubenswrapper[4746]: I1211 10:07:08.310458 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-hsbv7" Dec 11 10:07:08 crc kubenswrapper[4746]: I1211 10:07:08.310829 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-hsbv7" Dec 11 10:07:08 crc kubenswrapper[4746]: I1211 10:07:08.341721 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-hsbv7" Dec 11 10:07:09 crc kubenswrapper[4746]: I1211 10:07:09.201616 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-hsbv7" Dec 11 10:07:09 crc kubenswrapper[4746]: I1211 
10:07:09.356612 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-8bdwk" Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 10:07:16.229235 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm"] Dec 11 10:07:16 crc kubenswrapper[4746]: E1211 10:07:16.230403 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="712ae45d-eee2-41a7-9a2b-2bc89f5c34a2" containerName="registry-server" Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 10:07:16.230420 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="712ae45d-eee2-41a7-9a2b-2bc89f5c34a2" containerName="registry-server" Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 10:07:16.230578 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="712ae45d-eee2-41a7-9a2b-2bc89f5c34a2" containerName="registry-server" Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 10:07:16.231813 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 10:07:16.234598 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-52pvj" Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 10:07:16.243222 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm"] Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 10:07:16.416516 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bdce6d6c-a4ea-49a8-bef4-da390b678b24-util\") pod \"5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm\" (UID: \"bdce6d6c-a4ea-49a8-bef4-da390b678b24\") " pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 10:07:16.416614 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bdce6d6c-a4ea-49a8-bef4-da390b678b24-bundle\") pod \"5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm\" (UID: \"bdce6d6c-a4ea-49a8-bef4-da390b678b24\") " pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 10:07:16.417392 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbv98\" (UniqueName: \"kubernetes.io/projected/bdce6d6c-a4ea-49a8-bef4-da390b678b24-kube-api-access-cbv98\") pod \"5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm\" (UID: \"bdce6d6c-a4ea-49a8-bef4-da390b678b24\") " pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 
10:07:16.518985 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbv98\" (UniqueName: \"kubernetes.io/projected/bdce6d6c-a4ea-49a8-bef4-da390b678b24-kube-api-access-cbv98\") pod \"5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm\" (UID: \"bdce6d6c-a4ea-49a8-bef4-da390b678b24\") " pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 10:07:16.519123 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bdce6d6c-a4ea-49a8-bef4-da390b678b24-util\") pod \"5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm\" (UID: \"bdce6d6c-a4ea-49a8-bef4-da390b678b24\") " pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 10:07:16.519180 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bdce6d6c-a4ea-49a8-bef4-da390b678b24-bundle\") pod \"5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm\" (UID: \"bdce6d6c-a4ea-49a8-bef4-da390b678b24\") " pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 10:07:16.519763 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bdce6d6c-a4ea-49a8-bef4-da390b678b24-bundle\") pod \"5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm\" (UID: \"bdce6d6c-a4ea-49a8-bef4-da390b678b24\") " pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 10:07:16.519970 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/bdce6d6c-a4ea-49a8-bef4-da390b678b24-util\") pod \"5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm\" (UID: \"bdce6d6c-a4ea-49a8-bef4-da390b678b24\") " pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 10:07:16.548626 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbv98\" (UniqueName: \"kubernetes.io/projected/bdce6d6c-a4ea-49a8-bef4-da390b678b24-kube-api-access-cbv98\") pod \"5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm\" (UID: \"bdce6d6c-a4ea-49a8-bef4-da390b678b24\") " pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 10:07:16.552087 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" Dec 11 10:07:16 crc kubenswrapper[4746]: I1211 10:07:16.802138 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm"] Dec 11 10:07:16 crc kubenswrapper[4746]: W1211 10:07:16.808035 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbdce6d6c_a4ea_49a8_bef4_da390b678b24.slice/crio-3b8ef9062dbc2c8ff111bd88975284739302883dd9353e4ce89afa44311da88e WatchSource:0}: Error finding container 3b8ef9062dbc2c8ff111bd88975284739302883dd9353e4ce89afa44311da88e: Status 404 returned error can't find the container with id 3b8ef9062dbc2c8ff111bd88975284739302883dd9353e4ce89afa44311da88e Dec 11 10:07:17 crc kubenswrapper[4746]: I1211 10:07:17.237212 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" 
event={"ID":"bdce6d6c-a4ea-49a8-bef4-da390b678b24","Type":"ContainerStarted","Data":"cc10aa618fed57b76a71e900d13f0eea8ea5aaae2247999b33c79da19765dec8"} Dec 11 10:07:17 crc kubenswrapper[4746]: I1211 10:07:17.237296 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" event={"ID":"bdce6d6c-a4ea-49a8-bef4-da390b678b24","Type":"ContainerStarted","Data":"3b8ef9062dbc2c8ff111bd88975284739302883dd9353e4ce89afa44311da88e"} Dec 11 10:07:18 crc kubenswrapper[4746]: I1211 10:07:18.248991 4746 generic.go:334] "Generic (PLEG): container finished" podID="bdce6d6c-a4ea-49a8-bef4-da390b678b24" containerID="cc10aa618fed57b76a71e900d13f0eea8ea5aaae2247999b33c79da19765dec8" exitCode=0 Dec 11 10:07:18 crc kubenswrapper[4746]: I1211 10:07:18.249079 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" event={"ID":"bdce6d6c-a4ea-49a8-bef4-da390b678b24","Type":"ContainerDied","Data":"cc10aa618fed57b76a71e900d13f0eea8ea5aaae2247999b33c79da19765dec8"} Dec 11 10:07:19 crc kubenswrapper[4746]: I1211 10:07:19.258209 4746 generic.go:334] "Generic (PLEG): container finished" podID="bdce6d6c-a4ea-49a8-bef4-da390b678b24" containerID="7758bd177461511606c0e00dc09dc559660e7f768bf4ca05058617b85b238501" exitCode=0 Dec 11 10:07:19 crc kubenswrapper[4746]: I1211 10:07:19.258319 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" event={"ID":"bdce6d6c-a4ea-49a8-bef4-da390b678b24","Type":"ContainerDied","Data":"7758bd177461511606c0e00dc09dc559660e7f768bf4ca05058617b85b238501"} Dec 11 10:07:20 crc kubenswrapper[4746]: I1211 10:07:20.268407 4746 generic.go:334] "Generic (PLEG): container finished" podID="bdce6d6c-a4ea-49a8-bef4-da390b678b24" containerID="89da08f6de84058304a867f5bfc78cab3144b0457924203d31d800287fc3c601" exitCode=0 Dec 
11 10:07:20 crc kubenswrapper[4746]: I1211 10:07:20.268477 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" event={"ID":"bdce6d6c-a4ea-49a8-bef4-da390b678b24","Type":"ContainerDied","Data":"89da08f6de84058304a867f5bfc78cab3144b0457924203d31d800287fc3c601"} Dec 11 10:07:21 crc kubenswrapper[4746]: I1211 10:07:21.512156 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" Dec 11 10:07:21 crc kubenswrapper[4746]: I1211 10:07:21.691975 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bdce6d6c-a4ea-49a8-bef4-da390b678b24-util\") pod \"bdce6d6c-a4ea-49a8-bef4-da390b678b24\" (UID: \"bdce6d6c-a4ea-49a8-bef4-da390b678b24\") " Dec 11 10:07:21 crc kubenswrapper[4746]: I1211 10:07:21.692125 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbv98\" (UniqueName: \"kubernetes.io/projected/bdce6d6c-a4ea-49a8-bef4-da390b678b24-kube-api-access-cbv98\") pod \"bdce6d6c-a4ea-49a8-bef4-da390b678b24\" (UID: \"bdce6d6c-a4ea-49a8-bef4-da390b678b24\") " Dec 11 10:07:21 crc kubenswrapper[4746]: I1211 10:07:21.692157 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bdce6d6c-a4ea-49a8-bef4-da390b678b24-bundle\") pod \"bdce6d6c-a4ea-49a8-bef4-da390b678b24\" (UID: \"bdce6d6c-a4ea-49a8-bef4-da390b678b24\") " Dec 11 10:07:21 crc kubenswrapper[4746]: I1211 10:07:21.692993 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdce6d6c-a4ea-49a8-bef4-da390b678b24-bundle" (OuterVolumeSpecName: "bundle") pod "bdce6d6c-a4ea-49a8-bef4-da390b678b24" (UID: "bdce6d6c-a4ea-49a8-bef4-da390b678b24"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:07:21 crc kubenswrapper[4746]: I1211 10:07:21.701197 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdce6d6c-a4ea-49a8-bef4-da390b678b24-kube-api-access-cbv98" (OuterVolumeSpecName: "kube-api-access-cbv98") pod "bdce6d6c-a4ea-49a8-bef4-da390b678b24" (UID: "bdce6d6c-a4ea-49a8-bef4-da390b678b24"). InnerVolumeSpecName "kube-api-access-cbv98". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:07:21 crc kubenswrapper[4746]: I1211 10:07:21.717082 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bdce6d6c-a4ea-49a8-bef4-da390b678b24-util" (OuterVolumeSpecName: "util") pod "bdce6d6c-a4ea-49a8-bef4-da390b678b24" (UID: "bdce6d6c-a4ea-49a8-bef4-da390b678b24"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:07:21 crc kubenswrapper[4746]: I1211 10:07:21.794768 4746 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bdce6d6c-a4ea-49a8-bef4-da390b678b24-util\") on node \"crc\" DevicePath \"\"" Dec 11 10:07:21 crc kubenswrapper[4746]: I1211 10:07:21.795248 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbv98\" (UniqueName: \"kubernetes.io/projected/bdce6d6c-a4ea-49a8-bef4-da390b678b24-kube-api-access-cbv98\") on node \"crc\" DevicePath \"\"" Dec 11 10:07:21 crc kubenswrapper[4746]: I1211 10:07:21.795317 4746 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bdce6d6c-a4ea-49a8-bef4-da390b678b24-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:07:22 crc kubenswrapper[4746]: I1211 10:07:22.284036 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" 
event={"ID":"bdce6d6c-a4ea-49a8-bef4-da390b678b24","Type":"ContainerDied","Data":"3b8ef9062dbc2c8ff111bd88975284739302883dd9353e4ce89afa44311da88e"} Dec 11 10:07:22 crc kubenswrapper[4746]: I1211 10:07:22.284343 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b8ef9062dbc2c8ff111bd88975284739302883dd9353e4ce89afa44311da88e" Dec 11 10:07:22 crc kubenswrapper[4746]: I1211 10:07:22.284113 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm" Dec 11 10:07:28 crc kubenswrapper[4746]: I1211 10:07:28.421313 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8bb46fc5c-mr76h"] Dec 11 10:07:28 crc kubenswrapper[4746]: E1211 10:07:28.423032 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdce6d6c-a4ea-49a8-bef4-da390b678b24" containerName="util" Dec 11 10:07:28 crc kubenswrapper[4746]: I1211 10:07:28.423245 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdce6d6c-a4ea-49a8-bef4-da390b678b24" containerName="util" Dec 11 10:07:28 crc kubenswrapper[4746]: E1211 10:07:28.423312 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdce6d6c-a4ea-49a8-bef4-da390b678b24" containerName="extract" Dec 11 10:07:28 crc kubenswrapper[4746]: I1211 10:07:28.423362 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdce6d6c-a4ea-49a8-bef4-da390b678b24" containerName="extract" Dec 11 10:07:28 crc kubenswrapper[4746]: E1211 10:07:28.423418 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdce6d6c-a4ea-49a8-bef4-da390b678b24" containerName="pull" Dec 11 10:07:28 crc kubenswrapper[4746]: I1211 10:07:28.423473 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdce6d6c-a4ea-49a8-bef4-da390b678b24" containerName="pull" Dec 11 10:07:28 crc kubenswrapper[4746]: I1211 10:07:28.423632 4746 
memory_manager.go:354] "RemoveStaleState removing state" podUID="bdce6d6c-a4ea-49a8-bef4-da390b678b24" containerName="extract" Dec 11 10:07:28 crc kubenswrapper[4746]: I1211 10:07:28.424082 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-8bb46fc5c-mr76h" Dec 11 10:07:28 crc kubenswrapper[4746]: I1211 10:07:28.426326 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-6w2ml" Dec 11 10:07:28 crc kubenswrapper[4746]: I1211 10:07:28.454136 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8bb46fc5c-mr76h"] Dec 11 10:07:28 crc kubenswrapper[4746]: I1211 10:07:28.615466 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kglv4\" (UniqueName: \"kubernetes.io/projected/a9a072f8-67ea-43f6-a43a-1f553a050f11-kube-api-access-kglv4\") pod \"openstack-operator-controller-operator-8bb46fc5c-mr76h\" (UID: \"a9a072f8-67ea-43f6-a43a-1f553a050f11\") " pod="openstack-operators/openstack-operator-controller-operator-8bb46fc5c-mr76h" Dec 11 10:07:28 crc kubenswrapper[4746]: I1211 10:07:28.717161 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kglv4\" (UniqueName: \"kubernetes.io/projected/a9a072f8-67ea-43f6-a43a-1f553a050f11-kube-api-access-kglv4\") pod \"openstack-operator-controller-operator-8bb46fc5c-mr76h\" (UID: \"a9a072f8-67ea-43f6-a43a-1f553a050f11\") " pod="openstack-operators/openstack-operator-controller-operator-8bb46fc5c-mr76h" Dec 11 10:07:28 crc kubenswrapper[4746]: I1211 10:07:28.739003 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kglv4\" (UniqueName: \"kubernetes.io/projected/a9a072f8-67ea-43f6-a43a-1f553a050f11-kube-api-access-kglv4\") pod 
\"openstack-operator-controller-operator-8bb46fc5c-mr76h\" (UID: \"a9a072f8-67ea-43f6-a43a-1f553a050f11\") " pod="openstack-operators/openstack-operator-controller-operator-8bb46fc5c-mr76h" Dec 11 10:07:28 crc kubenswrapper[4746]: I1211 10:07:28.745350 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-8bb46fc5c-mr76h" Dec 11 10:07:29 crc kubenswrapper[4746]: I1211 10:07:29.002952 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-8bb46fc5c-mr76h"] Dec 11 10:07:29 crc kubenswrapper[4746]: I1211 10:07:29.343785 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8bb46fc5c-mr76h" event={"ID":"a9a072f8-67ea-43f6-a43a-1f553a050f11","Type":"ContainerStarted","Data":"00b4259c678109b3f504461f0ce18cf6f6594255444f4a6601109633c376ad5b"} Dec 11 10:07:29 crc kubenswrapper[4746]: I1211 10:07:29.878212 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:07:29 crc kubenswrapper[4746]: I1211 10:07:29.878301 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:07:34 crc kubenswrapper[4746]: I1211 10:07:34.381830 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-8bb46fc5c-mr76h" 
event={"ID":"a9a072f8-67ea-43f6-a43a-1f553a050f11","Type":"ContainerStarted","Data":"15e665df54d7be59fc7cfec1a4d68234587ac0eaa4cf292ff11d04d8d18c0abd"} Dec 11 10:07:34 crc kubenswrapper[4746]: I1211 10:07:34.382321 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-8bb46fc5c-mr76h" Dec 11 10:07:34 crc kubenswrapper[4746]: I1211 10:07:34.414689 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-8bb46fc5c-mr76h" podStartSLOduration=2.102771733 podStartE2EDuration="6.41467072s" podCreationTimestamp="2025-12-11 10:07:28 +0000 UTC" firstStartedPulling="2025-12-11 10:07:29.018527074 +0000 UTC m=+821.878390387" lastFinishedPulling="2025-12-11 10:07:33.330426071 +0000 UTC m=+826.190289374" observedRunningTime="2025-12-11 10:07:34.411649838 +0000 UTC m=+827.271513161" watchObservedRunningTime="2025-12-11 10:07:34.41467072 +0000 UTC m=+827.274534033" Dec 11 10:07:38 crc kubenswrapper[4746]: I1211 10:07:38.749165 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-8bb46fc5c-mr76h" Dec 11 10:07:59 crc kubenswrapper[4746]: I1211 10:07:59.878254 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:07:59 crc kubenswrapper[4746]: I1211 10:07:59.878833 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 
10:07:59 crc kubenswrapper[4746]: I1211 10:07:59.878878 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 10:07:59 crc kubenswrapper[4746]: I1211 10:07:59.879609 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6d9fcc5a422995e470abf6d012848f503e289189e1935c89236af0c3efd0b192"} pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:07:59 crc kubenswrapper[4746]: I1211 10:07:59.879679 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" containerID="cri-o://6d9fcc5a422995e470abf6d012848f503e289189e1935c89236af0c3efd0b192" gracePeriod=600 Dec 11 10:08:00 crc kubenswrapper[4746]: I1211 10:08:00.548457 4746 generic.go:334] "Generic (PLEG): container finished" podID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerID="6d9fcc5a422995e470abf6d012848f503e289189e1935c89236af0c3efd0b192" exitCode=0 Dec 11 10:08:00 crc kubenswrapper[4746]: I1211 10:08:00.548542 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerDied","Data":"6d9fcc5a422995e470abf6d012848f503e289189e1935c89236af0c3efd0b192"} Dec 11 10:08:00 crc kubenswrapper[4746]: I1211 10:08:00.548989 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"f3599b9865470e5f66c552862b8f5ba28a4b29a63faedd683cf231e8c14b3f2f"} Dec 11 10:08:00 crc kubenswrapper[4746]: 
I1211 10:08:00.549021 4746 scope.go:117] "RemoveContainer" containerID="83dcd2da292677c5018ad8d1c74c0fb581818e298e9cb9996f2d7ffb5e3102ac" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.463769 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5rmqc"] Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.465901 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rmqc" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.478809 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5rmqc"] Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.653662 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50587a78-88d9-43a1-98d8-8b7941be4600-catalog-content\") pod \"community-operators-5rmqc\" (UID: \"50587a78-88d9-43a1-98d8-8b7941be4600\") " pod="openshift-marketplace/community-operators-5rmqc" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.653722 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50587a78-88d9-43a1-98d8-8b7941be4600-utilities\") pod \"community-operators-5rmqc\" (UID: \"50587a78-88d9-43a1-98d8-8b7941be4600\") " pod="openshift-marketplace/community-operators-5rmqc" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.653750 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f24lt\" (UniqueName: \"kubernetes.io/projected/50587a78-88d9-43a1-98d8-8b7941be4600-kube-api-access-f24lt\") pod \"community-operators-5rmqc\" (UID: \"50587a78-88d9-43a1-98d8-8b7941be4600\") " pod="openshift-marketplace/community-operators-5rmqc" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.755170 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f24lt\" (UniqueName: \"kubernetes.io/projected/50587a78-88d9-43a1-98d8-8b7941be4600-kube-api-access-f24lt\") pod \"community-operators-5rmqc\" (UID: \"50587a78-88d9-43a1-98d8-8b7941be4600\") " pod="openshift-marketplace/community-operators-5rmqc" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.756451 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50587a78-88d9-43a1-98d8-8b7941be4600-catalog-content\") pod \"community-operators-5rmqc\" (UID: \"50587a78-88d9-43a1-98d8-8b7941be4600\") " pod="openshift-marketplace/community-operators-5rmqc" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.756548 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50587a78-88d9-43a1-98d8-8b7941be4600-utilities\") pod \"community-operators-5rmqc\" (UID: \"50587a78-88d9-43a1-98d8-8b7941be4600\") " pod="openshift-marketplace/community-operators-5rmqc" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.757081 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50587a78-88d9-43a1-98d8-8b7941be4600-utilities\") pod \"community-operators-5rmqc\" (UID: \"50587a78-88d9-43a1-98d8-8b7941be4600\") " pod="openshift-marketplace/community-operators-5rmqc" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.757825 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50587a78-88d9-43a1-98d8-8b7941be4600-catalog-content\") pod \"community-operators-5rmqc\" (UID: \"50587a78-88d9-43a1-98d8-8b7941be4600\") " pod="openshift-marketplace/community-operators-5rmqc" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.794840 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-f24lt\" (UniqueName: \"kubernetes.io/projected/50587a78-88d9-43a1-98d8-8b7941be4600-kube-api-access-f24lt\") pod \"community-operators-5rmqc\" (UID: \"50587a78-88d9-43a1-98d8-8b7941be4600\") " pod="openshift-marketplace/community-operators-5rmqc" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.855563 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-glgvn"] Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.856716 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-glgvn" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.858228 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqr8l\" (UniqueName: \"kubernetes.io/projected/b0e6f3b3-a8b7-4bca-8e55-118bd35a9635-kube-api-access-zqr8l\") pod \"cinder-operator-controller-manager-6c677c69b-glgvn\" (UID: \"b0e6f3b3-a8b7-4bca-8e55-118bd35a9635\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-glgvn" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.859485 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-gwjz5" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.864246 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-7dcw7"] Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.865707 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7dcw7" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.867647 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-xhwgk" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.879606 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-glgvn"] Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.888080 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-bkvrk"] Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.889354 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-bkvrk" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.892230 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-4td4f" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.895325 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-7dcw7"] Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.903303 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-5gjg9"] Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.905792 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-5gjg9" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.910682 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ldc7r" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.921608 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-5gjg9"] Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.934326 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-bkvrk"] Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.959463 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqr8l\" (UniqueName: \"kubernetes.io/projected/b0e6f3b3-a8b7-4bca-8e55-118bd35a9635-kube-api-access-zqr8l\") pod \"cinder-operator-controller-manager-6c677c69b-glgvn\" (UID: \"b0e6f3b3-a8b7-4bca-8e55-118bd35a9635\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-glgvn" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.962217 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-dntwk"] Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.968342 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-dntwk" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.972437 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-8ctj9" Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.981115 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-dntwk"] Dec 11 10:08:13 crc kubenswrapper[4746]: I1211 10:08:13.991786 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqr8l\" (UniqueName: \"kubernetes.io/projected/b0e6f3b3-a8b7-4bca-8e55-118bd35a9635-kube-api-access-zqr8l\") pod \"cinder-operator-controller-manager-6c677c69b-glgvn\" (UID: \"b0e6f3b3-a8b7-4bca-8e55-118bd35a9635\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-glgvn" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.042186 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7ftgl"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.044158 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7ftgl" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.052383 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-xrz9q" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.061379 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlbqk\" (UniqueName: \"kubernetes.io/projected/f9f2bc47-53f4-4216-8fb2-27f2db87123e-kube-api-access-zlbqk\") pod \"designate-operator-controller-manager-697fb699cf-bkvrk\" (UID: \"f9f2bc47-53f4-4216-8fb2-27f2db87123e\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-bkvrk" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.061492 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhtpf\" (UniqueName: \"kubernetes.io/projected/3b4184f0-3e35-4e70-9adc-87a4681c343c-kube-api-access-fhtpf\") pod \"barbican-operator-controller-manager-7d9dfd778-7dcw7\" (UID: \"3b4184f0-3e35-4e70-9adc-87a4681c343c\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7dcw7" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.061587 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjfpj\" (UniqueName: \"kubernetes.io/projected/01451fbe-7fd7-447c-b6ef-967f7ddff94b-kube-api-access-qjfpj\") pod \"glance-operator-controller-manager-5697bb5779-5gjg9\" (UID: \"01451fbe-7fd7-447c-b6ef-967f7ddff94b\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-5gjg9" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.073902 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7ftgl"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 
10:08:14.085171 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5rmqc" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.092919 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.094088 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.096770 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-zstsf"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.098126 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-zstsf" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.100096 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.100309 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-dh69n" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.110914 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-gst2b" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.118162 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.139864 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-zstsf"] Dec 11 10:08:14 crc kubenswrapper[4746]: 
I1211 10:08:14.163163 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjfpj\" (UniqueName: \"kubernetes.io/projected/01451fbe-7fd7-447c-b6ef-967f7ddff94b-kube-api-access-qjfpj\") pod \"glance-operator-controller-manager-5697bb5779-5gjg9\" (UID: \"01451fbe-7fd7-447c-b6ef-967f7ddff94b\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-5gjg9" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.163274 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf6sr\" (UniqueName: \"kubernetes.io/projected/2d353fc2-d0c0-47ed-be04-acc87fd980a7-kube-api-access-zf6sr\") pod \"heat-operator-controller-manager-5f64f6f8bb-dntwk\" (UID: \"2d353fc2-d0c0-47ed-be04-acc87fd980a7\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-dntwk" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.163329 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxsd2\" (UniqueName: \"kubernetes.io/projected/fecc9092-bba1-4488-af41-3d970dba0968-kube-api-access-qxsd2\") pod \"horizon-operator-controller-manager-68c6d99b8f-7ftgl\" (UID: \"fecc9092-bba1-4488-af41-3d970dba0968\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7ftgl" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.163388 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlbqk\" (UniqueName: \"kubernetes.io/projected/f9f2bc47-53f4-4216-8fb2-27f2db87123e-kube-api-access-zlbqk\") pod \"designate-operator-controller-manager-697fb699cf-bkvrk\" (UID: \"f9f2bc47-53f4-4216-8fb2-27f2db87123e\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-bkvrk" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.163473 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-fhtpf\" (UniqueName: \"kubernetes.io/projected/3b4184f0-3e35-4e70-9adc-87a4681c343c-kube-api-access-fhtpf\") pod \"barbican-operator-controller-manager-7d9dfd778-7dcw7\" (UID: \"3b4184f0-3e35-4e70-9adc-87a4681c343c\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7dcw7" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.184209 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-glgvn" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.212356 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhtpf\" (UniqueName: \"kubernetes.io/projected/3b4184f0-3e35-4e70-9adc-87a4681c343c-kube-api-access-fhtpf\") pod \"barbican-operator-controller-manager-7d9dfd778-7dcw7\" (UID: \"3b4184f0-3e35-4e70-9adc-87a4681c343c\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7dcw7" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.215701 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjfpj\" (UniqueName: \"kubernetes.io/projected/01451fbe-7fd7-447c-b6ef-967f7ddff94b-kube-api-access-qjfpj\") pod \"glance-operator-controller-manager-5697bb5779-5gjg9\" (UID: \"01451fbe-7fd7-447c-b6ef-967f7ddff94b\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-5gjg9" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.230628 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-5gjg9" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.236126 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlbqk\" (UniqueName: \"kubernetes.io/projected/f9f2bc47-53f4-4216-8fb2-27f2db87123e-kube-api-access-zlbqk\") pod \"designate-operator-controller-manager-697fb699cf-bkvrk\" (UID: \"f9f2bc47-53f4-4216-8fb2-27f2db87123e\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-bkvrk" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.251117 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.252440 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.262000 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-ghl9k" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.265534 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-n8c44\" (UID: \"4a11cb95-3107-4526-8ab3-82bb6fd57cef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.265593 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szq8n\" (UniqueName: \"kubernetes.io/projected/4a11cb95-3107-4526-8ab3-82bb6fd57cef-kube-api-access-szq8n\") pod \"infra-operator-controller-manager-78d48bff9d-n8c44\" (UID: 
\"4a11cb95-3107-4526-8ab3-82bb6fd57cef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.265639 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwq7r\" (UniqueName: \"kubernetes.io/projected/b47efaee-6921-4f8b-876a-3cf52bd10a27-kube-api-access-vwq7r\") pod \"ironic-operator-controller-manager-967d97867-zstsf\" (UID: \"b47efaee-6921-4f8b-876a-3cf52bd10a27\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-zstsf" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.265667 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf6sr\" (UniqueName: \"kubernetes.io/projected/2d353fc2-d0c0-47ed-be04-acc87fd980a7-kube-api-access-zf6sr\") pod \"heat-operator-controller-manager-5f64f6f8bb-dntwk\" (UID: \"2d353fc2-d0c0-47ed-be04-acc87fd980a7\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-dntwk" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.265688 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxsd2\" (UniqueName: \"kubernetes.io/projected/fecc9092-bba1-4488-af41-3d970dba0968-kube-api-access-qxsd2\") pod \"horizon-operator-controller-manager-68c6d99b8f-7ftgl\" (UID: \"fecc9092-bba1-4488-af41-3d970dba0968\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7ftgl" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.284181 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.285379 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.294783 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rxk5f" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.320084 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxsd2\" (UniqueName: \"kubernetes.io/projected/fecc9092-bba1-4488-af41-3d970dba0968-kube-api-access-qxsd2\") pod \"horizon-operator-controller-manager-68c6d99b8f-7ftgl\" (UID: \"fecc9092-bba1-4488-af41-3d970dba0968\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7ftgl" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.320147 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.352739 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf6sr\" (UniqueName: \"kubernetes.io/projected/2d353fc2-d0c0-47ed-be04-acc87fd980a7-kube-api-access-zf6sr\") pod \"heat-operator-controller-manager-5f64f6f8bb-dntwk\" (UID: \"2d353fc2-d0c0-47ed-be04-acc87fd980a7\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-dntwk" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.352809 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.353982 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.358965 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-g2lj8" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.370648 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7ftgl" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.446312 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.447678 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-n8c44\" (UID: \"4a11cb95-3107-4526-8ab3-82bb6fd57cef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.447816 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szq8n\" (UniqueName: \"kubernetes.io/projected/4a11cb95-3107-4526-8ab3-82bb6fd57cef-kube-api-access-szq8n\") pod \"infra-operator-controller-manager-78d48bff9d-n8c44\" (UID: \"4a11cb95-3107-4526-8ab3-82bb6fd57cef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.447894 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljtbr\" (UniqueName: \"kubernetes.io/projected/252b923b-a265-46c1-8c3e-9ef62d5b1f7a-kube-api-access-ljtbr\") pod \"manila-operator-controller-manager-5b5fd79c9c-8pn4t\" (UID: \"252b923b-a265-46c1-8c3e-9ef62d5b1f7a\") 
" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.448198 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwq7r\" (UniqueName: \"kubernetes.io/projected/b47efaee-6921-4f8b-876a-3cf52bd10a27-kube-api-access-vwq7r\") pod \"ironic-operator-controller-manager-967d97867-zstsf\" (UID: \"b47efaee-6921-4f8b-876a-3cf52bd10a27\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-zstsf" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.448235 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bgbp\" (UniqueName: \"kubernetes.io/projected/9b6740eb-7439-465a-b30a-c838a4d65be6-kube-api-access-9bgbp\") pod \"keystone-operator-controller-manager-7765d96ddf-x5ghz\" (UID: \"9b6740eb-7439-465a-b30a-c838a4d65be6\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.448357 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95dxw\" (UniqueName: \"kubernetes.io/projected/4c437995-b526-4ae3-9956-b541694d54d4-kube-api-access-95dxw\") pod \"mariadb-operator-controller-manager-79c8c4686c-wd9vj\" (UID: \"4c437995-b526-4ae3-9956-b541694d54d4\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj" Dec 11 10:08:14 crc kubenswrapper[4746]: E1211 10:08:14.470121 4746 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 10:08:14 crc kubenswrapper[4746]: E1211 10:08:14.470228 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert podName:4a11cb95-3107-4526-8ab3-82bb6fd57cef nodeName:}" failed. 
No retries permitted until 2025-12-11 10:08:14.97020635 +0000 UTC m=+867.830069663 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert") pod "infra-operator-controller-manager-78d48bff9d-n8c44" (UID: "4a11cb95-3107-4526-8ab3-82bb6fd57cef") : secret "infra-operator-webhook-server-cert" not found Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.493949 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7dcw7" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.513274 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-bkvrk" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.532546 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwq7r\" (UniqueName: \"kubernetes.io/projected/b47efaee-6921-4f8b-876a-3cf52bd10a27-kube-api-access-vwq7r\") pod \"ironic-operator-controller-manager-967d97867-zstsf\" (UID: \"b47efaee-6921-4f8b-876a-3cf52bd10a27\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-zstsf" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.533010 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.534003 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szq8n\" (UniqueName: \"kubernetes.io/projected/4a11cb95-3107-4526-8ab3-82bb6fd57cef-kube-api-access-szq8n\") pod \"infra-operator-controller-manager-78d48bff9d-n8c44\" (UID: \"4a11cb95-3107-4526-8ab3-82bb6fd57cef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.585478 
4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bgbp\" (UniqueName: \"kubernetes.io/projected/9b6740eb-7439-465a-b30a-c838a4d65be6-kube-api-access-9bgbp\") pod \"keystone-operator-controller-manager-7765d96ddf-x5ghz\" (UID: \"9b6740eb-7439-465a-b30a-c838a4d65be6\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.585706 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95dxw\" (UniqueName: \"kubernetes.io/projected/4c437995-b526-4ae3-9956-b541694d54d4-kube-api-access-95dxw\") pod \"mariadb-operator-controller-manager-79c8c4686c-wd9vj\" (UID: \"4c437995-b526-4ae3-9956-b541694d54d4\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.586063 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljtbr\" (UniqueName: \"kubernetes.io/projected/252b923b-a265-46c1-8c3e-9ef62d5b1f7a-kube-api-access-ljtbr\") pod \"manila-operator-controller-manager-5b5fd79c9c-8pn4t\" (UID: \"252b923b-a265-46c1-8c3e-9ef62d5b1f7a\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.620339 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vr9wq"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.720126 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vr9wq" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.722433 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95dxw\" (UniqueName: \"kubernetes.io/projected/4c437995-b526-4ae3-9956-b541694d54d4-kube-api-access-95dxw\") pod \"mariadb-operator-controller-manager-79c8c4686c-wd9vj\" (UID: \"4c437995-b526-4ae3-9956-b541694d54d4\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.728286 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-dntwk" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.746438 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-pgd2p" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.747156 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bgbp\" (UniqueName: \"kubernetes.io/projected/9b6740eb-7439-465a-b30a-c838a4d65be6-kube-api-access-9bgbp\") pod \"keystone-operator-controller-manager-7765d96ddf-x5ghz\" (UID: \"9b6740eb-7439-465a-b30a-c838a4d65be6\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.763240 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-5c8g4"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.764669 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5c8g4" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.768701 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljtbr\" (UniqueName: \"kubernetes.io/projected/252b923b-a265-46c1-8c3e-9ef62d5b1f7a-kube-api-access-ljtbr\") pod \"manila-operator-controller-manager-5b5fd79c9c-8pn4t\" (UID: \"252b923b-a265-46c1-8c3e-9ef62d5b1f7a\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.781528 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.784570 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-zstsf" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.796733 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vr9wq"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.808095 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-2x8vd" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.818769 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-5c8g4"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.836864 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-g9qlp"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.838703 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-g9qlp" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.840619 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv4qt\" (UniqueName: \"kubernetes.io/projected/efe16578-2d6a-40a9-9f8c-9b868a6d6a66-kube-api-access-sv4qt\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-vr9wq\" (UID: \"efe16578-2d6a-40a9-9f8c-9b868a6d6a66\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vr9wq" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.840756 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjcss\" (UniqueName: \"kubernetes.io/projected/5d8442a7-c511-4f69-b04e-45e750f27bfa-kube-api-access-jjcss\") pod \"nova-operator-controller-manager-697bc559fc-5c8g4\" (UID: \"5d8442a7-c511-4f69-b04e-45e750f27bfa\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5c8g4" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.857761 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-dp5j7" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.869484 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-g9qlp"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.894163 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.896109 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.919853 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-m7jpf" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.932532 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.934884 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.935076 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.946407 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdbbq\" (UniqueName: \"kubernetes.io/projected/9ea7dd8b-4871-43c0-a66f-113742627a6b-kube-api-access-kdbbq\") pod \"octavia-operator-controller-manager-998648c74-g9qlp\" (UID: \"9ea7dd8b-4871-43c0-a66f-113742627a6b\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-g9qlp" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.946479 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8tnmf\" (UID: \"648adc18-f046-4dcf-9a52-c69946ffa83a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.946525 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b44b8\" (UniqueName: \"kubernetes.io/projected/648adc18-f046-4dcf-9a52-c69946ffa83a-kube-api-access-b44b8\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8tnmf\" (UID: \"648adc18-f046-4dcf-9a52-c69946ffa83a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.946564 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv4qt\" (UniqueName: \"kubernetes.io/projected/efe16578-2d6a-40a9-9f8c-9b868a6d6a66-kube-api-access-sv4qt\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-vr9wq\" (UID: \"efe16578-2d6a-40a9-9f8c-9b868a6d6a66\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vr9wq" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.946620 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjcss\" (UniqueName: \"kubernetes.io/projected/5d8442a7-c511-4f69-b04e-45e750f27bfa-kube-api-access-jjcss\") pod \"nova-operator-controller-manager-697bc559fc-5c8g4\" (UID: \"5d8442a7-c511-4f69-b04e-45e750f27bfa\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5c8g4" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.967147 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-882l4"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.969098 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-882l4" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.974646 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-5kn7q" Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.988158 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-b797r"] Dec 11 10:08:14 crc kubenswrapper[4746]: I1211 10:08:14.988779 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.010129 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv4qt\" (UniqueName: \"kubernetes.io/projected/efe16578-2d6a-40a9-9f8c-9b868a6d6a66-kube-api-access-sv4qt\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-vr9wq\" (UID: \"efe16578-2d6a-40a9-9f8c-9b868a6d6a66\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vr9wq" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.011900 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-b797r" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.031591 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-kklt5"] Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.033810 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-kklt5" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.033816 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-n77ns" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.043069 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-h5n22" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.048032 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdbbq\" (UniqueName: \"kubernetes.io/projected/9ea7dd8b-4871-43c0-a66f-113742627a6b-kube-api-access-kdbbq\") pod \"octavia-operator-controller-manager-998648c74-g9qlp\" (UID: \"9ea7dd8b-4871-43c0-a66f-113742627a6b\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-g9qlp" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.048167 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8tnmf\" (UID: \"648adc18-f046-4dcf-9a52-c69946ffa83a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.048292 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b44b8\" (UniqueName: \"kubernetes.io/projected/648adc18-f046-4dcf-9a52-c69946ffa83a-kube-api-access-b44b8\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8tnmf\" (UID: \"648adc18-f046-4dcf-9a52-c69946ffa83a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.048346 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mfw6\" (UniqueName: \"kubernetes.io/projected/798a9e32-0bc8-4231-834a-fc2b002c87aa-kube-api-access-9mfw6\") pod \"ovn-operator-controller-manager-b6456fdb6-kklt5\" (UID: \"798a9e32-0bc8-4231-834a-fc2b002c87aa\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-kklt5" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.048387 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v68vs\" (UniqueName: \"kubernetes.io/projected/778c0ffc-7a48-4159-8a1b-f34a805bc1ae-kube-api-access-v68vs\") pod \"placement-operator-controller-manager-78f8948974-882l4\" (UID: \"778c0ffc-7a48-4159-8a1b-f34a805bc1ae\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-882l4" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.048433 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2kkh\" (UniqueName: \"kubernetes.io/projected/3b8201ce-fb41-4474-9609-689fe0d093ec-kube-api-access-x2kkh\") pod \"swift-operator-controller-manager-9d58d64bc-b797r\" (UID: \"3b8201ce-fb41-4474-9609-689fe0d093ec\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-b797r" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.048529 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-n8c44\" (UID: \"4a11cb95-3107-4526-8ab3-82bb6fd57cef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" Dec 11 10:08:15 crc kubenswrapper[4746]: E1211 10:08:15.048716 4746 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 10:08:15 crc 
kubenswrapper[4746]: E1211 10:08:15.048776 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert podName:4a11cb95-3107-4526-8ab3-82bb6fd57cef nodeName:}" failed. No retries permitted until 2025-12-11 10:08:16.048758367 +0000 UTC m=+868.908621680 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert") pod "infra-operator-controller-manager-78d48bff9d-n8c44" (UID: "4a11cb95-3107-4526-8ab3-82bb6fd57cef") : secret "infra-operator-webhook-server-cert" not found Dec 11 10:08:15 crc kubenswrapper[4746]: E1211 10:08:15.049316 4746 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:08:15 crc kubenswrapper[4746]: E1211 10:08:15.049342 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert podName:648adc18-f046-4dcf-9a52-c69946ffa83a nodeName:}" failed. No retries permitted until 2025-12-11 10:08:15.549334652 +0000 UTC m=+868.409197965 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8tnmf" (UID: "648adc18-f046-4dcf-9a52-c69946ffa83a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.053526 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjcss\" (UniqueName: \"kubernetes.io/projected/5d8442a7-c511-4f69-b04e-45e750f27bfa-kube-api-access-jjcss\") pod \"nova-operator-controller-manager-697bc559fc-5c8g4\" (UID: \"5d8442a7-c511-4f69-b04e-45e750f27bfa\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5c8g4" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.060591 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2"] Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.062239 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.063640 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vr9wq" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.101436 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-882l4"] Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.106919 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-fv6lx" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.150067 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nhnn\" (UniqueName: \"kubernetes.io/projected/4890e377-1482-4341-b002-bb54e05d5ded-kube-api-access-6nhnn\") pod \"telemetry-operator-controller-manager-58d5ff84df-kz5f2\" (UID: \"4890e377-1482-4341-b002-bb54e05d5ded\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.158653 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mfw6\" (UniqueName: \"kubernetes.io/projected/798a9e32-0bc8-4231-834a-fc2b002c87aa-kube-api-access-9mfw6\") pod \"ovn-operator-controller-manager-b6456fdb6-kklt5\" (UID: \"798a9e32-0bc8-4231-834a-fc2b002c87aa\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-kklt5" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.158733 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v68vs\" (UniqueName: \"kubernetes.io/projected/778c0ffc-7a48-4159-8a1b-f34a805bc1ae-kube-api-access-v68vs\") pod \"placement-operator-controller-manager-78f8948974-882l4\" (UID: \"778c0ffc-7a48-4159-8a1b-f34a805bc1ae\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-882l4" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.158809 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2kkh\" (UniqueName: \"kubernetes.io/projected/3b8201ce-fb41-4474-9609-689fe0d093ec-kube-api-access-x2kkh\") pod \"swift-operator-controller-manager-9d58d64bc-b797r\" (UID: \"3b8201ce-fb41-4474-9609-689fe0d093ec\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-b797r" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.162115 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b44b8\" (UniqueName: \"kubernetes.io/projected/648adc18-f046-4dcf-9a52-c69946ffa83a-kube-api-access-b44b8\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8tnmf\" (UID: \"648adc18-f046-4dcf-9a52-c69946ffa83a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.193763 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-b797r"] Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.194634 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5c8g4" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.195621 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc"] Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.197193 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.201500 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-nrb6g" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.204095 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdbbq\" (UniqueName: \"kubernetes.io/projected/9ea7dd8b-4871-43c0-a66f-113742627a6b-kube-api-access-kdbbq\") pod \"octavia-operator-controller-manager-998648c74-g9qlp\" (UID: \"9ea7dd8b-4871-43c0-a66f-113742627a6b\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-g9qlp" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.213130 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-kklt5"] Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.221769 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2"] Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.227217 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc"] Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.261079 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2kkh\" (UniqueName: \"kubernetes.io/projected/3b8201ce-fb41-4474-9609-689fe0d093ec-kube-api-access-x2kkh\") pod \"swift-operator-controller-manager-9d58d64bc-b797r\" (UID: \"3b8201ce-fb41-4474-9609-689fe0d093ec\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-b797r" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.261441 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6nhnn\" (UniqueName: 
\"kubernetes.io/projected/4890e377-1482-4341-b002-bb54e05d5ded-kube-api-access-6nhnn\") pod \"telemetry-operator-controller-manager-58d5ff84df-kz5f2\" (UID: \"4890e377-1482-4341-b002-bb54e05d5ded\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.261536 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9fzs\" (UniqueName: \"kubernetes.io/projected/00e181e7-8b84-49f6-96c5-4da046644469-kube-api-access-m9fzs\") pod \"test-operator-controller-manager-5854674fcc-zbwxc\" (UID: \"00e181e7-8b84-49f6-96c5-4da046644469\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.264777 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mfw6\" (UniqueName: \"kubernetes.io/projected/798a9e32-0bc8-4231-834a-fc2b002c87aa-kube-api-access-9mfw6\") pod \"ovn-operator-controller-manager-b6456fdb6-kklt5\" (UID: \"798a9e32-0bc8-4231-834a-fc2b002c87aa\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-kklt5" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.265839 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v68vs\" (UniqueName: \"kubernetes.io/projected/778c0ffc-7a48-4159-8a1b-f34a805bc1ae-kube-api-access-v68vs\") pod \"placement-operator-controller-manager-78f8948974-882l4\" (UID: \"778c0ffc-7a48-4159-8a1b-f34a805bc1ae\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-882l4" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.283142 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4"] Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.284601 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.287428 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-v4jtr" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.346790 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-g9qlp" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.360792 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4"] Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.363312 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9fzs\" (UniqueName: \"kubernetes.io/projected/00e181e7-8b84-49f6-96c5-4da046644469-kube-api-access-m9fzs\") pod \"test-operator-controller-manager-5854674fcc-zbwxc\" (UID: \"00e181e7-8b84-49f6-96c5-4da046644469\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.376805 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxrt5\" (UniqueName: \"kubernetes.io/projected/5f3fcc59-b850-4041-84b3-9ccc788c73fc-kube-api-access-kxrt5\") pod \"watcher-operator-controller-manager-75944c9b7-sg2q4\" (UID: \"5f3fcc59-b850-4041-84b3-9ccc788c73fc\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.381021 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5"] Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.381928 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.387891 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.389362 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.389506 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-bns66" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.418867 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6nhnn\" (UniqueName: \"kubernetes.io/projected/4890e377-1482-4341-b002-bb54e05d5ded-kube-api-access-6nhnn\") pod \"telemetry-operator-controller-manager-58d5ff84df-kz5f2\" (UID: \"4890e377-1482-4341-b002-bb54e05d5ded\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.431661 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9fzs\" (UniqueName: \"kubernetes.io/projected/00e181e7-8b84-49f6-96c5-4da046644469-kube-api-access-m9fzs\") pod \"test-operator-controller-manager-5854674fcc-zbwxc\" (UID: \"00e181e7-8b84-49f6-96c5-4da046644469\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.452921 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-882l4" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.479014 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.484368 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.484469 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmrcg\" (UniqueName: \"kubernetes.io/projected/5d1a162f-09fe-4a7a-854e-3236282b3189-kube-api-access-jmrcg\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.484552 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxrt5\" (UniqueName: \"kubernetes.io/projected/5f3fcc59-b850-4041-84b3-9ccc788c73fc-kube-api-access-kxrt5\") pod \"watcher-operator-controller-manager-75944c9b7-sg2q4\" (UID: \"5f3fcc59-b850-4041-84b3-9ccc788c73fc\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4" Dec 11 10:08:15 crc 
kubenswrapper[4746]: I1211 10:08:15.527000 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxrt5\" (UniqueName: \"kubernetes.io/projected/5f3fcc59-b850-4041-84b3-9ccc788c73fc-kube-api-access-kxrt5\") pod \"watcher-operator-controller-manager-75944c9b7-sg2q4\" (UID: \"5f3fcc59-b850-4041-84b3-9ccc788c73fc\") " pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.527292 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5"] Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.534469 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-b797r" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.575634 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-kklt5" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.588660 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.588813 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 
10:08:15.588881 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmrcg\" (UniqueName: \"kubernetes.io/projected/5d1a162f-09fe-4a7a-854e-3236282b3189-kube-api-access-jmrcg\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.588953 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8tnmf\" (UID: \"648adc18-f046-4dcf-9a52-c69946ffa83a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" Dec 11 10:08:15 crc kubenswrapper[4746]: E1211 10:08:15.589204 4746 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:08:15 crc kubenswrapper[4746]: E1211 10:08:15.589320 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert podName:648adc18-f046-4dcf-9a52-c69946ffa83a nodeName:}" failed. No retries permitted until 2025-12-11 10:08:16.589296916 +0000 UTC m=+869.449160229 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8tnmf" (UID: "648adc18-f046-4dcf-9a52-c69946ffa83a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:08:15 crc kubenswrapper[4746]: E1211 10:08:15.591340 4746 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 10:08:15 crc kubenswrapper[4746]: E1211 10:08:15.591390 4746 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 10:08:15 crc kubenswrapper[4746]: E1211 10:08:15.594292 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs podName:5d1a162f-09fe-4a7a-854e-3236282b3189 nodeName:}" failed. No retries permitted until 2025-12-11 10:08:16.09425852 +0000 UTC m=+868.954121833 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs") pod "openstack-operator-controller-manager-686fb77d86-hmhr5" (UID: "5d1a162f-09fe-4a7a-854e-3236282b3189") : secret "webhook-server-cert" not found Dec 11 10:08:15 crc kubenswrapper[4746]: E1211 10:08:15.594387 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs podName:5d1a162f-09fe-4a7a-854e-3236282b3189 nodeName:}" failed. No retries permitted until 2025-12-11 10:08:16.094363613 +0000 UTC m=+868.954226926 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs") pod "openstack-operator-controller-manager-686fb77d86-hmhr5" (UID: "5d1a162f-09fe-4a7a-854e-3236282b3189") : secret "metrics-server-cert" not found Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.599485 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rq8z4"] Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.600834 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rq8z4" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.604066 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-wxf57" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.606021 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.626634 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rq8z4"] Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.633663 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmrcg\" (UniqueName: \"kubernetes.io/projected/5d1a162f-09fe-4a7a-854e-3236282b3189-kube-api-access-jmrcg\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.638171 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.699364 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjg4d\" (UniqueName: \"kubernetes.io/projected/23f6b30a-57a8-4920-ab2e-dfebef4d9ce6-kube-api-access-bjg4d\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rq8z4\" (UID: \"23f6b30a-57a8-4920-ab2e-dfebef4d9ce6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rq8z4" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.799957 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.801508 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjg4d\" (UniqueName: \"kubernetes.io/projected/23f6b30a-57a8-4920-ab2e-dfebef4d9ce6-kube-api-access-bjg4d\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rq8z4\" (UID: \"23f6b30a-57a8-4920-ab2e-dfebef4d9ce6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rq8z4" Dec 11 10:08:15 crc kubenswrapper[4746]: I1211 10:08:15.832869 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjg4d\" (UniqueName: \"kubernetes.io/projected/23f6b30a-57a8-4920-ab2e-dfebef4d9ce6-kube-api-access-bjg4d\") pod \"rabbitmq-cluster-operator-manager-668c99d594-rq8z4\" (UID: \"23f6b30a-57a8-4920-ab2e-dfebef4d9ce6\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rq8z4" Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.004879 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rq8z4" Dec 11 10:08:16 crc kubenswrapper[4746]: E1211 10:08:16.108079 4746 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 10:08:16 crc kubenswrapper[4746]: E1211 10:08:16.108512 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs podName:5d1a162f-09fe-4a7a-854e-3236282b3189 nodeName:}" failed. No retries permitted until 2025-12-11 10:08:17.108484017 +0000 UTC m=+869.968347320 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs") pod "openstack-operator-controller-manager-686fb77d86-hmhr5" (UID: "5d1a162f-09fe-4a7a-854e-3236282b3189") : secret "metrics-server-cert" not found Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.107869 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.108943 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.109058 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-n8c44\" (UID: \"4a11cb95-3107-4526-8ab3-82bb6fd57cef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" Dec 11 10:08:16 crc kubenswrapper[4746]: E1211 10:08:16.109217 4746 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 10:08:16 crc kubenswrapper[4746]: E1211 10:08:16.109318 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs podName:5d1a162f-09fe-4a7a-854e-3236282b3189 nodeName:}" failed. No retries permitted until 2025-12-11 10:08:17.109290929 +0000 UTC m=+869.969154422 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs") pod "openstack-operator-controller-manager-686fb77d86-hmhr5" (UID: "5d1a162f-09fe-4a7a-854e-3236282b3189") : secret "webhook-server-cert" not found Dec 11 10:08:16 crc kubenswrapper[4746]: E1211 10:08:16.109408 4746 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 10:08:16 crc kubenswrapper[4746]: E1211 10:08:16.109451 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert podName:4a11cb95-3107-4526-8ab3-82bb6fd57cef nodeName:}" failed. No retries permitted until 2025-12-11 10:08:18.109441293 +0000 UTC m=+870.969304786 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert") pod "infra-operator-controller-manager-78d48bff9d-n8c44" (UID: "4a11cb95-3107-4526-8ab3-82bb6fd57cef") : secret "infra-operator-webhook-server-cert" not found Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.434195 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-glgvn"] Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.441211 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5rmqc"] Dec 11 10:08:16 crc kubenswrapper[4746]: W1211 10:08:16.464934 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50587a78_88d9_43a1_98d8_8b7941be4600.slice/crio-c322646e7e154ad423d717d2f8dd2beb2fc0e08d8d4e83abc9b9d7956feec477 WatchSource:0}: Error finding container c322646e7e154ad423d717d2f8dd2beb2fc0e08d8d4e83abc9b9d7956feec477: Status 404 returned error can't find the container with id c322646e7e154ad423d717d2f8dd2beb2fc0e08d8d4e83abc9b9d7956feec477 Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.636442 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8tnmf\" (UID: \"648adc18-f046-4dcf-9a52-c69946ffa83a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" Dec 11 10:08:16 crc kubenswrapper[4746]: E1211 10:08:16.636707 4746 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:08:16 crc kubenswrapper[4746]: E1211 10:08:16.636790 4746 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert podName:648adc18-f046-4dcf-9a52-c69946ffa83a nodeName:}" failed. No retries permitted until 2025-12-11 10:08:18.636761404 +0000 UTC m=+871.496624717 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8tnmf" (UID: "648adc18-f046-4dcf-9a52-c69946ffa83a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.740693 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-g9qlp"] Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.756795 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-b797r"] Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.787376 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-glgvn" event={"ID":"b0e6f3b3-a8b7-4bca-8e55-118bd35a9635","Type":"ContainerStarted","Data":"f648d8a69b0a2c948b3b38238c6e86fda5e7a5190cacc3880f29fe557648be90"} Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.789300 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-b797r" event={"ID":"3b8201ce-fb41-4474-9609-689fe0d093ec","Type":"ContainerStarted","Data":"af31459e80390d16f371dffe4ed461afb15d008032223238fa5fb47a8473a98c"} Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.790770 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-g9qlp" event={"ID":"9ea7dd8b-4871-43c0-a66f-113742627a6b","Type":"ContainerStarted","Data":"97c1a9ba638edc729122b58a29523baa4c36c99dc0210f03e795bda6e2d3c229"} 
Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.793029 4746 generic.go:334] "Generic (PLEG): container finished" podID="50587a78-88d9-43a1-98d8-8b7941be4600" containerID="fc2db3e9ad9c1a9f73609c9210439336a77bfe9020842fb7dbdd2659338f488d" exitCode=0 Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.793193 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rmqc" event={"ID":"50587a78-88d9-43a1-98d8-8b7941be4600","Type":"ContainerDied","Data":"fc2db3e9ad9c1a9f73609c9210439336a77bfe9020842fb7dbdd2659338f488d"} Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.793271 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rmqc" event={"ID":"50587a78-88d9-43a1-98d8-8b7941be4600","Type":"ContainerStarted","Data":"c322646e7e154ad423d717d2f8dd2beb2fc0e08d8d4e83abc9b9d7956feec477"} Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.936789 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-zstsf"] Dec 11 10:08:16 crc kubenswrapper[4746]: I1211 10:08:16.947095 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-bkvrk"] Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.002149 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-7dcw7"] Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.021197 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-5c8g4"] Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.028184 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7ftgl"] Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.041632 4746 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vr9wq"] Dec 11 10:08:17 crc kubenswrapper[4746]: W1211 10:08:17.076401 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podefe16578_2d6a_40a9_9f8c_9b868a6d6a66.slice/crio-32a8cf9106f3b4bafdce6376ed1f736d0feb5d3f20cc8fb54c6046450c2daf31 WatchSource:0}: Error finding container 32a8cf9106f3b4bafdce6376ed1f736d0feb5d3f20cc8fb54c6046450c2daf31: Status 404 returned error can't find the container with id 32a8cf9106f3b4bafdce6376ed1f736d0feb5d3f20cc8fb54c6046450c2daf31 Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.083916 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-5gjg9"] Dec 11 10:08:17 crc kubenswrapper[4746]: W1211 10:08:17.105288 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod798a9e32_0bc8_4231_834a_fc2b002c87aa.slice/crio-fec91d36b96e9eca6f7e281b27f2c5ece9ec768167b64dbf9d9aec8547c6274d WatchSource:0}: Error finding container fec91d36b96e9eca6f7e281b27f2c5ece9ec768167b64dbf9d9aec8547c6274d: Status 404 returned error can't find the container with id fec91d36b96e9eca6f7e281b27f2c5ece9ec768167b64dbf9d9aec8547c6274d Dec 11 10:08:17 crc kubenswrapper[4746]: W1211 10:08:17.105990 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b6740eb_7439_465a_b30a_c838a4d65be6.slice/crio-7006f16bfea96e21c031dc819e3e8297ae0cdcf98513fa12362c13c856a9318c WatchSource:0}: Error finding container 7006f16bfea96e21c031dc819e3e8297ae0cdcf98513fa12362c13c856a9318c: Status 404 returned error can't find the container with id 7006f16bfea96e21c031dc819e3e8297ae0cdcf98513fa12362c13c856a9318c Dec 11 10:08:17 crc kubenswrapper[4746]: W1211 10:08:17.116226 4746 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod252b923b_a265_46c1_8c3e_9ef62d5b1f7a.slice/crio-b0328bcd5406d38df361e2a2b918b06fa9c0ebe913512a83a68bf2ad5aba0a3d WatchSource:0}: Error finding container b0328bcd5406d38df361e2a2b918b06fa9c0ebe913512a83a68bf2ad5aba0a3d: Status 404 returned error can't find the container with id b0328bcd5406d38df361e2a2b918b06fa9c0ebe913512a83a68bf2ad5aba0a3d Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.116302 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz"] Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.118906 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9bgbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-x5ghz_openstack-operators(9b6740eb-7439-465a-b30a-c838a4d65be6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.119039 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6nhnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-kz5f2_openstack-operators(4890e377-1482-4341-b002-bb54e05d5ded): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:08:17 crc kubenswrapper[4746]: W1211 10:08:17.119378 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23f6b30a_57a8_4920_ab2e_dfebef4d9ce6.slice/crio-168b28baa3677741ffd2edd7e38e9d818fd968b2ed0d9fbf59cf6bdea3cef75b WatchSource:0}: Error finding container 168b28baa3677741ffd2edd7e38e9d818fd968b2ed0d9fbf59cf6bdea3cef75b: Status 404 returned error can't find the container with id 168b28baa3677741ffd2edd7e38e9d818fd968b2ed0d9fbf59cf6bdea3cef75b Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.123171 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9bgbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-x5ghz_openstack-operators(9b6740eb-7439-465a-b30a-c838a4d65be6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.123275 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6nhnn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-kz5f2_openstack-operators(4890e377-1482-4341-b002-bb54e05d5ded): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.125801 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2" podUID="4890e377-1482-4341-b002-bb54e05d5ded" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.125898 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz" 
podUID="9b6740eb-7439-465a-b30a-c838a4d65be6" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.126159 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljtbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-8pn4t_openstack-operators(252b923b-a265-46c1-8c3e-9ef62d5b1f7a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.127341 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 
500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bjg4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-rq8z4_openstack-operators(23f6b30a-57a8-4920-ab2e-dfebef4d9ce6): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.128528 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rq8z4" podUID="23f6b30a-57a8-4920-ab2e-dfebef4d9ce6" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.128791 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljtbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-8pn4t_openstack-operators(252b923b-a265-46c1-8c3e-9ef62d5b1f7a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.130170 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t" podUID="252b923b-a265-46c1-8c3e-9ef62d5b1f7a" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.134334 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m9fzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-zbwxc_openstack-operators(00e181e7-8b84-49f6-96c5-4da046644469): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.135345 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2"] Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.136809 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m9fzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-zbwxc_openstack-operators(00e181e7-8b84-49f6-96c5-4da046644469): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.138653 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc" podUID="00e181e7-8b84-49f6-96c5-4da046644469" Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.148670 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-kklt5"] Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.151291 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: 
\"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.151426 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.151650 4746 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.151747 4746 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.151773 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs podName:5d1a162f-09fe-4a7a-854e-3236282b3189 nodeName:}" failed. No retries permitted until 2025-12-11 10:08:19.151723452 +0000 UTC m=+872.011586765 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs") pod "openstack-operator-controller-manager-686fb77d86-hmhr5" (UID: "5d1a162f-09fe-4a7a-854e-3236282b3189") : secret "metrics-server-cert" not found Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.151851 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs podName:5d1a162f-09fe-4a7a-854e-3236282b3189 nodeName:}" failed. No retries permitted until 2025-12-11 10:08:19.151824154 +0000 UTC m=+872.011687467 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs") pod "openstack-operator-controller-manager-686fb77d86-hmhr5" (UID: "5d1a162f-09fe-4a7a-854e-3236282b3189") : secret "webhook-server-cert" not found Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.177832 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t"] Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.229126 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rq8z4"] Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.241540 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc"] Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.250367 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-dntwk"] Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.253000 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v68vs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-882l4_openstack-operators(778c0ffc-7a48-4159-8a1b-f34a805bc1ae): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.265246 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v68vs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-882l4_openstack-operators(778c0ffc-7a48-4159-8a1b-f34a805bc1ae): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.266431 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-882l4" podUID="778c0ffc-7a48-4159-8a1b-f34a805bc1ae" Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.268675 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-78f8948974-882l4"] Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.269344 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-95dxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-79c8c4686c-wd9vj_openstack-operators(4c437995-b526-4ae3-9956-b541694d54d4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.270616 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kxrt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-sg2q4_openstack-operators(5f3fcc59-b850-4041-84b3-9ccc788c73fc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.271015 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true 
--v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-95dxw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-79c8c4686c-wd9vj_openstack-operators(4c437995-b526-4ae3-9956-b541694d54d4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.272329 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj" podUID="4c437995-b526-4ae3-9956-b541694d54d4" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.272443 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kxrt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-sg2q4_openstack-operators(5f3fcc59-b850-4041-84b3-9ccc788c73fc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.273618 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4" 
podUID="5f3fcc59-b850-4041-84b3-9ccc788c73fc" Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.276482 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj"] Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.281698 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4"] Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.820171 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5c8g4" event={"ID":"5d8442a7-c511-4f69-b04e-45e750f27bfa","Type":"ContainerStarted","Data":"03104f439c1e324eff69fb2cab98582701f5007d673aeb5ad60b21855b2e6dd3"} Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.823602 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-882l4" event={"ID":"778c0ffc-7a48-4159-8a1b-f34a805bc1ae","Type":"ContainerStarted","Data":"a487009e4044117487846150bd43cca741afa91c2f7df4f7791f7caa39674093"} Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.839501 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-882l4" podUID="778c0ffc-7a48-4159-8a1b-f34a805bc1ae" Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.839760 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t" 
event={"ID":"252b923b-a265-46c1-8c3e-9ef62d5b1f7a","Type":"ContainerStarted","Data":"b0328bcd5406d38df361e2a2b918b06fa9c0ebe913512a83a68bf2ad5aba0a3d"} Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.862258 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4" event={"ID":"5f3fcc59-b850-4041-84b3-9ccc788c73fc","Type":"ContainerStarted","Data":"15df8560c29dd98d4d3e85bf8cec0e46c93ef6df42631b0e9dc8133b0d909196"} Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.864973 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t" podUID="252b923b-a265-46c1-8c3e-9ef62d5b1f7a" Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.869006 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz" event={"ID":"9b6740eb-7439-465a-b30a-c838a4d65be6","Type":"ContainerStarted","Data":"7006f16bfea96e21c031dc819e3e8297ae0cdcf98513fa12362c13c856a9318c"} Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.871449 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4" podUID="5f3fcc59-b850-4041-84b3-9ccc788c73fc" Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.872651 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz" podUID="9b6740eb-7439-465a-b30a-c838a4d65be6" Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.872914 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7ftgl" event={"ID":"fecc9092-bba1-4488-af41-3d970dba0968","Type":"ContainerStarted","Data":"a7cfb8924ac6d8c8c2373a870ff6d696aaa54909e6d07ffc8ae04db2f0d674f7"} Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.876017 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-dntwk" event={"ID":"2d353fc2-d0c0-47ed-be04-acc87fd980a7","Type":"ContainerStarted","Data":"f52a75c6fed7eb3543db65fc05a8bf2c95dc12df6ffd15a953c6959f2405f9ed"} Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.879730 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2" event={"ID":"4890e377-1482-4341-b002-bb54e05d5ded","Type":"ContainerStarted","Data":"636ee57654d4dc05ca03da0149ae841c0175aa684e00258a53f65ae7aff9affc"} Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.884180 4746 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-kklt5" event={"ID":"798a9e32-0bc8-4231-834a-fc2b002c87aa","Type":"ContainerStarted","Data":"fec91d36b96e9eca6f7e281b27f2c5ece9ec768167b64dbf9d9aec8547c6274d"} Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.884572 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2" podUID="4890e377-1482-4341-b002-bb54e05d5ded" Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.912421 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vr9wq" event={"ID":"efe16578-2d6a-40a9-9f8c-9b868a6d6a66","Type":"ContainerStarted","Data":"32a8cf9106f3b4bafdce6376ed1f736d0feb5d3f20cc8fb54c6046450c2daf31"} Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.914750 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc" event={"ID":"00e181e7-8b84-49f6-96c5-4da046644469","Type":"ContainerStarted","Data":"a2ead98b904662c7604d4b174d2a5eef3a46a8b0520a0c2d4e5d5d2d7e1c9c05"} Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.918474 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-bkvrk" event={"ID":"f9f2bc47-53f4-4216-8fb2-27f2db87123e","Type":"ContainerStarted","Data":"5dad1ff776af449d8bcd4755eb16b053719207c264428c7df6662820a7a2d2a5"} Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.921327 4746 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc" podUID="00e181e7-8b84-49f6-96c5-4da046644469" Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.923813 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-5gjg9" event={"ID":"01451fbe-7fd7-447c-b6ef-967f7ddff94b","Type":"ContainerStarted","Data":"1f3033ab15744244b734b60a30262cf96d57888deb84f05aff002abf88acbfcd"} Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.925613 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-zstsf" event={"ID":"b47efaee-6921-4f8b-876a-3cf52bd10a27","Type":"ContainerStarted","Data":"28eba633f3fbae0a9275d3dd2fdadd4a77bcbe64e316dbf01f7e48a2f8b2f70e"} Dec 11 10:08:17 crc kubenswrapper[4746]: I1211 10:08:17.939313 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rq8z4" event={"ID":"23f6b30a-57a8-4920-ab2e-dfebef4d9ce6","Type":"ContainerStarted","Data":"168b28baa3677741ffd2edd7e38e9d818fd968b2ed0d9fbf59cf6bdea3cef75b"} Dec 11 10:08:17 crc kubenswrapper[4746]: E1211 10:08:17.941491 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rq8z4" podUID="23f6b30a-57a8-4920-ab2e-dfebef4d9ce6" Dec 11 10:08:18 crc kubenswrapper[4746]: I1211 10:08:18.027494 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7dcw7" event={"ID":"3b4184f0-3e35-4e70-9adc-87a4681c343c","Type":"ContainerStarted","Data":"e08d5f024e0361db0a20a4c35935e687ef39d5767c0e8ebac8250c4cd7bb08bb"} Dec 11 10:08:18 crc kubenswrapper[4746]: I1211 10:08:18.055601 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj" event={"ID":"4c437995-b526-4ae3-9956-b541694d54d4","Type":"ContainerStarted","Data":"9d198aa6a325f3342c6816691ac3552df0da1539186c557fe72055c57ebb21e2"} Dec 11 10:08:18 crc kubenswrapper[4746]: E1211 10:08:18.070121 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj" podUID="4c437995-b526-4ae3-9956-b541694d54d4" Dec 11 10:08:18 crc kubenswrapper[4746]: I1211 10:08:18.124714 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-n8c44\" (UID: \"4a11cb95-3107-4526-8ab3-82bb6fd57cef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" Dec 11 10:08:18 crc kubenswrapper[4746]: E1211 10:08:18.125397 4746 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 10:08:18 crc kubenswrapper[4746]: E1211 10:08:18.125455 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert podName:4a11cb95-3107-4526-8ab3-82bb6fd57cef nodeName:}" failed. No retries permitted until 2025-12-11 10:08:22.125437755 +0000 UTC m=+874.985301068 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert") pod "infra-operator-controller-manager-78d48bff9d-n8c44" (UID: "4a11cb95-3107-4526-8ab3-82bb6fd57cef") : secret "infra-operator-webhook-server-cert" not found Dec 11 10:08:18 crc kubenswrapper[4746]: I1211 10:08:18.661120 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8tnmf\" (UID: \"648adc18-f046-4dcf-9a52-c69946ffa83a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" Dec 11 10:08:18 crc kubenswrapper[4746]: E1211 10:08:18.661320 4746 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:08:18 crc kubenswrapper[4746]: E1211 10:08:18.661421 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert podName:648adc18-f046-4dcf-9a52-c69946ffa83a nodeName:}" failed. No retries permitted until 2025-12-11 10:08:22.66135577 +0000 UTC m=+875.521219083 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8tnmf" (UID: "648adc18-f046-4dcf-9a52-c69946ffa83a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:08:19 crc kubenswrapper[4746]: E1211 10:08:19.088048 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rq8z4" podUID="23f6b30a-57a8-4920-ab2e-dfebef4d9ce6" Dec 11 10:08:19 crc kubenswrapper[4746]: E1211 10:08:19.096423 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz" podUID="9b6740eb-7439-465a-b30a-c838a4d65be6" Dec 11 10:08:19 crc kubenswrapper[4746]: E1211 10:08:19.096557 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:424da951f13f1fbe9083215dc9f5088f90676dd813f01fdf3c1a8639b61cbaad\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj" podUID="4c437995-b526-4ae3-9956-b541694d54d4" Dec 11 10:08:19 crc kubenswrapper[4746]: E1211 10:08:19.096635 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t" podUID="252b923b-a265-46c1-8c3e-9ef62d5b1f7a" Dec 11 10:08:19 crc kubenswrapper[4746]: E1211 10:08:19.096703 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-882l4" podUID="778c0ffc-7a48-4159-8a1b-f34a805bc1ae" Dec 11 10:08:19 crc kubenswrapper[4746]: E1211 10:08:19.096733 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" 
pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4" podUID="5f3fcc59-b850-4041-84b3-9ccc788c73fc" Dec 11 10:08:19 crc kubenswrapper[4746]: E1211 10:08:19.098101 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc" podUID="00e181e7-8b84-49f6-96c5-4da046644469" Dec 11 10:08:19 crc kubenswrapper[4746]: E1211 10:08:19.098385 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2" podUID="4890e377-1482-4341-b002-bb54e05d5ded" Dec 11 10:08:19 crc kubenswrapper[4746]: I1211 10:08:19.173448 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:19 crc kubenswrapper[4746]: I1211 10:08:19.173843 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:19 crc kubenswrapper[4746]: E1211 10:08:19.174713 4746 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 10:08:19 crc kubenswrapper[4746]: E1211 10:08:19.174823 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs podName:5d1a162f-09fe-4a7a-854e-3236282b3189 nodeName:}" failed. No retries permitted until 2025-12-11 10:08:23.174780746 +0000 UTC m=+876.034644139 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs") pod "openstack-operator-controller-manager-686fb77d86-hmhr5" (UID: "5d1a162f-09fe-4a7a-854e-3236282b3189") : secret "webhook-server-cert" not found Dec 11 10:08:19 crc kubenswrapper[4746]: E1211 10:08:19.175282 4746 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 10:08:19 crc kubenswrapper[4746]: E1211 10:08:19.175346 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs podName:5d1a162f-09fe-4a7a-854e-3236282b3189 nodeName:}" failed. No retries permitted until 2025-12-11 10:08:23.17532642 +0000 UTC m=+876.035189813 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs") pod "openstack-operator-controller-manager-686fb77d86-hmhr5" (UID: "5d1a162f-09fe-4a7a-854e-3236282b3189") : secret "metrics-server-cert" not found Dec 11 10:08:22 crc kubenswrapper[4746]: I1211 10:08:22.144836 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-n8c44\" (UID: \"4a11cb95-3107-4526-8ab3-82bb6fd57cef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" Dec 11 10:08:22 crc kubenswrapper[4746]: E1211 10:08:22.146320 4746 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 11 10:08:22 crc kubenswrapper[4746]: E1211 10:08:22.146474 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert podName:4a11cb95-3107-4526-8ab3-82bb6fd57cef nodeName:}" failed. No retries permitted until 2025-12-11 10:08:30.146417963 +0000 UTC m=+883.006281286 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert") pod "infra-operator-controller-manager-78d48bff9d-n8c44" (UID: "4a11cb95-3107-4526-8ab3-82bb6fd57cef") : secret "infra-operator-webhook-server-cert" not found Dec 11 10:08:22 crc kubenswrapper[4746]: I1211 10:08:22.758099 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8tnmf\" (UID: \"648adc18-f046-4dcf-9a52-c69946ffa83a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" Dec 11 10:08:22 crc kubenswrapper[4746]: E1211 10:08:22.758671 4746 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:08:22 crc kubenswrapper[4746]: E1211 10:08:22.758851 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert podName:648adc18-f046-4dcf-9a52-c69946ffa83a nodeName:}" failed. No retries permitted until 2025-12-11 10:08:30.758815086 +0000 UTC m=+883.618678399 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert") pod "openstack-baremetal-operator-controller-manager-84b575879f8tnmf" (UID: "648adc18-f046-4dcf-9a52-c69946ffa83a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 11 10:08:23 crc kubenswrapper[4746]: I1211 10:08:23.267495 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:23 crc kubenswrapper[4746]: E1211 10:08:23.267703 4746 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 10:08:23 crc kubenswrapper[4746]: E1211 10:08:23.268108 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs podName:5d1a162f-09fe-4a7a-854e-3236282b3189 nodeName:}" failed. No retries permitted until 2025-12-11 10:08:31.268082648 +0000 UTC m=+884.127946011 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs") pod "openstack-operator-controller-manager-686fb77d86-hmhr5" (UID: "5d1a162f-09fe-4a7a-854e-3236282b3189") : secret "webhook-server-cert" not found Dec 11 10:08:23 crc kubenswrapper[4746]: I1211 10:08:23.268148 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:23 crc kubenswrapper[4746]: E1211 10:08:23.268343 4746 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 11 10:08:23 crc kubenswrapper[4746]: E1211 10:08:23.268404 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs podName:5d1a162f-09fe-4a7a-854e-3236282b3189 nodeName:}" failed. No retries permitted until 2025-12-11 10:08:31.268385007 +0000 UTC m=+884.128248350 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs") pod "openstack-operator-controller-manager-686fb77d86-hmhr5" (UID: "5d1a162f-09fe-4a7a-854e-3236282b3189") : secret "metrics-server-cert" not found Dec 11 10:08:30 crc kubenswrapper[4746]: I1211 10:08:30.150073 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-n8c44\" (UID: \"4a11cb95-3107-4526-8ab3-82bb6fd57cef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" Dec 11 10:08:30 crc kubenswrapper[4746]: I1211 10:08:30.179915 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4a11cb95-3107-4526-8ab3-82bb6fd57cef-cert\") pod \"infra-operator-controller-manager-78d48bff9d-n8c44\" (UID: \"4a11cb95-3107-4526-8ab3-82bb6fd57cef\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" Dec 11 10:08:30 crc kubenswrapper[4746]: I1211 10:08:30.365438 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" Dec 11 10:08:30 crc kubenswrapper[4746]: I1211 10:08:30.762076 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8tnmf\" (UID: \"648adc18-f046-4dcf-9a52-c69946ffa83a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" Dec 11 10:08:30 crc kubenswrapper[4746]: I1211 10:08:30.766526 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/648adc18-f046-4dcf-9a52-c69946ffa83a-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879f8tnmf\" (UID: \"648adc18-f046-4dcf-9a52-c69946ffa83a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" Dec 11 10:08:31 crc kubenswrapper[4746]: I1211 10:08:31.008712 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" Dec 11 10:08:31 crc kubenswrapper[4746]: I1211 10:08:31.301149 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:31 crc kubenswrapper[4746]: I1211 10:08:31.301273 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:31 crc kubenswrapper[4746]: E1211 10:08:31.301357 4746 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 11 10:08:31 crc kubenswrapper[4746]: E1211 10:08:31.301459 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs podName:5d1a162f-09fe-4a7a-854e-3236282b3189 nodeName:}" failed. No retries permitted until 2025-12-11 10:08:47.301431551 +0000 UTC m=+900.161294884 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs") pod "openstack-operator-controller-manager-686fb77d86-hmhr5" (UID: "5d1a162f-09fe-4a7a-854e-3236282b3189") : secret "webhook-server-cert" not found Dec 11 10:08:31 crc kubenswrapper[4746]: I1211 10:08:31.308993 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-metrics-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:33 crc kubenswrapper[4746]: E1211 10:08:33.996451 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Dec 11 10:08:34 crc kubenswrapper[4746]: E1211 10:08:33.996846 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qxsd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-7ftgl_openstack-operators(fecc9092-bba1-4488-af41-3d970dba0968): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:08:35 crc kubenswrapper[4746]: E1211 10:08:35.804707 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168" Dec 11 10:08:35 crc kubenswrapper[4746]: E1211 10:08:35.805266 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:d9a3694865a7d54ee96397add18c3898886e98d079aa20876a0f4de1fa7a7168,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kdbbq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-998648c74-g9qlp_openstack-operators(9ea7dd8b-4871-43c0-a66f-113742627a6b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:08:45 crc kubenswrapper[4746]: E1211 10:08:45.638227 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" Dec 11 10:08:45 crc kubenswrapper[4746]: E1211 10:08:45.639294 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fhtpf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-7dcw7_openstack-operators(3b4184f0-3e35-4e70-9adc-87a4681c343c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:08:46 crc kubenswrapper[4746]: E1211 10:08:46.368942 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 11 10:08:46 crc kubenswrapper[4746]: E1211 10:08:46.369456 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9mfw6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-kklt5_openstack-operators(798a9e32-0bc8-4231-834a-fc2b002c87aa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:08:47 crc kubenswrapper[4746]: E1211 10:08:47.296556 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991" Dec 11 10:08:47 crc kubenswrapper[4746]: E1211 10:08:47.296775 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x2kkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-b797r_openstack-operators(3b8201ce-fb41-4474-9609-689fe0d093ec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:08:47 crc kubenswrapper[4746]: I1211 10:08:47.396701 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:47 crc kubenswrapper[4746]: I1211 10:08:47.404613 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/5d1a162f-09fe-4a7a-854e-3236282b3189-webhook-certs\") pod \"openstack-operator-controller-manager-686fb77d86-hmhr5\" (UID: \"5d1a162f-09fe-4a7a-854e-3236282b3189\") " pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:47 crc kubenswrapper[4746]: I1211 10:08:47.474355 4746 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:08:48 crc kubenswrapper[4746]: E1211 10:08:48.076802 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a" Dec 11 10:08:48 crc kubenswrapper[4746]: E1211 10:08:48.077287 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:961417d59f527d925ac48ff6a11de747d0493315e496e34dc83d76a1a1fff58a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kxrt5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-75944c9b7-sg2q4_openstack-operators(5f3fcc59-b850-4041-84b3-9ccc788c73fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:08:48 crc kubenswrapper[4746]: E1211 10:08:48.917478 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 11 10:08:48 crc kubenswrapper[4746]: E1211 10:08:48.917719 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v68vs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-882l4_openstack-operators(778c0ffc-7a48-4159-8a1b-f34a805bc1ae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:08:52 crc kubenswrapper[4746]: E1211 10:08:52.335150 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 11 10:08:52 crc kubenswrapper[4746]: E1211 10:08:52.335596 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f24lt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5rmqc_openshift-marketplace(50587a78-88d9-43a1-98d8-8b7941be4600): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 11 10:08:52 crc kubenswrapper[4746]: E1211 10:08:52.337192 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-5rmqc" podUID="50587a78-88d9-43a1-98d8-8b7941be4600" Dec 11 10:08:52 crc 
kubenswrapper[4746]: E1211 10:08:52.894863 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5rmqc" podUID="50587a78-88d9-43a1-98d8-8b7941be4600" Dec 11 10:08:52 crc kubenswrapper[4746]: E1211 10:08:52.925401 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 11 10:08:52 crc kubenswrapper[4746]: E1211 10:08:52.926259 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jjcss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-5c8g4_openstack-operators(5d8442a7-c511-4f69-b04e-45e750f27bfa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:08:53 crc kubenswrapper[4746]: E1211 10:08:53.505194 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Dec 11 10:08:53 crc kubenswrapper[4746]: E1211 10:08:53.505400 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect 
--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m9fzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-zbwxc_openstack-operators(00e181e7-8b84-49f6-96c5-4da046644469): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:09:01 crc kubenswrapper[4746]: I1211 10:09:01.658590 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44"] Dec 11 10:09:01 crc kubenswrapper[4746]: I1211 10:09:01.673232 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf"] Dec 11 10:09:02 crc kubenswrapper[4746]: E1211 10:09:02.199979 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 11 10:09:02 crc kubenswrapper[4746]: E1211 10:09:02.200176 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bjg4d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-rq8z4_openstack-operators(23f6b30a-57a8-4920-ab2e-dfebef4d9ce6): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:09:02 crc kubenswrapper[4746]: E1211 10:09:02.201455 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rq8z4" podUID="23f6b30a-57a8-4920-ab2e-dfebef4d9ce6" Dec 11 10:09:02 crc kubenswrapper[4746]: I1211 10:09:02.356129 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 10:09:02 crc kubenswrapper[4746]: I1211 10:09:02.602690 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" event={"ID":"648adc18-f046-4dcf-9a52-c69946ffa83a","Type":"ContainerStarted","Data":"cde081b77f789f05787ec4e6aa25d2ac8156646c626906fa6e6f5a383ce3ecd5"} Dec 11 10:09:02 crc kubenswrapper[4746]: I1211 10:09:02.604206 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" event={"ID":"4a11cb95-3107-4526-8ab3-82bb6fd57cef","Type":"ContainerStarted","Data":"f9acfb8b36af5895a0b938cc363d42ddc162e8d6551cc4860905371b8183bb2e"} Dec 11 10:09:02 crc kubenswrapper[4746]: I1211 10:09:02.823757 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5"] Dec 11 10:09:02 crc kubenswrapper[4746]: W1211 10:09:02.887403 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d1a162f_09fe_4a7a_854e_3236282b3189.slice/crio-6e780f79953cb786361f4a02ca1f0d3c50a80bc68bdfc957fe88379dc36e1b7e WatchSource:0}: Error finding container 6e780f79953cb786361f4a02ca1f0d3c50a80bc68bdfc957fe88379dc36e1b7e: Status 404 returned 
error can't find the container with id 6e780f79953cb786361f4a02ca1f0d3c50a80bc68bdfc957fe88379dc36e1b7e Dec 11 10:09:03 crc kubenswrapper[4746]: I1211 10:09:03.613215 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" event={"ID":"5d1a162f-09fe-4a7a-854e-3236282b3189","Type":"ContainerStarted","Data":"6e780f79953cb786361f4a02ca1f0d3c50a80bc68bdfc957fe88379dc36e1b7e"} Dec 11 10:09:03 crc kubenswrapper[4746]: I1211 10:09:03.614865 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-bkvrk" event={"ID":"f9f2bc47-53f4-4216-8fb2-27f2db87123e","Type":"ContainerStarted","Data":"2ffd1122fe30955712d463f07388ad3c46d9e23a8ce5ea877d228d17849f474d"} Dec 11 10:09:03 crc kubenswrapper[4746]: I1211 10:09:03.616193 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vr9wq" event={"ID":"efe16578-2d6a-40a9-9f8c-9b868a6d6a66","Type":"ContainerStarted","Data":"ab91fc159883f0061f4ad4a2efcde75425af9eea93e1928f160c6b51d12313c2"} Dec 11 10:09:03 crc kubenswrapper[4746]: I1211 10:09:03.617455 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-dntwk" event={"ID":"2d353fc2-d0c0-47ed-be04-acc87fd980a7","Type":"ContainerStarted","Data":"5ce937e26004f1a32b077dc00f0a497430e857f5fcd6fa9344e74b8c2c055ec6"} Dec 11 10:09:03 crc kubenswrapper[4746]: I1211 10:09:03.619534 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-5gjg9" event={"ID":"01451fbe-7fd7-447c-b6ef-967f7ddff94b","Type":"ContainerStarted","Data":"3d1e0a059a1221cdc422cf749d9edd19d67998e5a573aae8915c62278e099675"} Dec 11 10:09:03 crc kubenswrapper[4746]: I1211 10:09:03.620988 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ironic-operator-controller-manager-967d97867-zstsf" event={"ID":"b47efaee-6921-4f8b-876a-3cf52bd10a27","Type":"ContainerStarted","Data":"430669a0e74dd73aa7a3d886b754bfe1a50717acfb6a30378270a15c405b98c4"} Dec 11 10:09:04 crc kubenswrapper[4746]: I1211 10:09:04.688436 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-glgvn" event={"ID":"b0e6f3b3-a8b7-4bca-8e55-118bd35a9635","Type":"ContainerStarted","Data":"0a8cc5b3d4126c77fa5954b1406f08c23259872d98c18d1f86b5fe78716f81eb"} Dec 11 10:09:06 crc kubenswrapper[4746]: I1211 10:09:06.712819 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj" event={"ID":"4c437995-b526-4ae3-9956-b541694d54d4","Type":"ContainerStarted","Data":"27c0eb1de516e1d6abc9092f88561c5b702ded685723d98935093832a25088fb"} Dec 11 10:09:10 crc kubenswrapper[4746]: E1211 10:09:10.413412 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0" Dec 11 10:09:10 crc kubenswrapper[4746]: E1211 10:09:10.414112 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qxsd2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-7ftgl_openstack-operators(fecc9092-bba1-4488-af41-3d970dba0968): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:09:10 crc kubenswrapper[4746]: E1211 10:09:10.415396 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7ftgl" podUID="fecc9092-bba1-4488-af41-3d970dba0968" Dec 11 10:09:10 crc kubenswrapper[4746]: I1211 10:09:10.761363 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz" event={"ID":"9b6740eb-7439-465a-b30a-c838a4d65be6","Type":"ContainerStarted","Data":"0033d5a2f2af6301471e949ed60b0417fca76ff6d702ff204a1f9c202a3a1a6d"} Dec 11 10:09:10 
crc kubenswrapper[4746]: I1211 10:09:10.763792 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" event={"ID":"5d1a162f-09fe-4a7a-854e-3236282b3189","Type":"ContainerStarted","Data":"d5681be52ab5f87cd03e83648f914f407ab372ed169217f9e25deae017001a83"} Dec 11 10:09:10 crc kubenswrapper[4746]: I1211 10:09:10.765267 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:09:10 crc kubenswrapper[4746]: I1211 10:09:10.843073 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t" event={"ID":"252b923b-a265-46c1-8c3e-9ef62d5b1f7a","Type":"ContainerStarted","Data":"6b93fd9b8ab37132b843b473188ff7bcdb0175d6115f0057002b7b2e27d830fb"} Dec 11 10:09:10 crc kubenswrapper[4746]: I1211 10:09:10.885437 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" podStartSLOduration=55.885410071 podStartE2EDuration="55.885410071s" podCreationTimestamp="2025-12-11 10:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:09:10.867144787 +0000 UTC m=+923.727008110" watchObservedRunningTime="2025-12-11 10:09:10.885410071 +0000 UTC m=+923.745273384" Dec 11 10:09:13 crc kubenswrapper[4746]: I1211 10:09:13.863744 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2" event={"ID":"4890e377-1482-4341-b002-bb54e05d5ded","Type":"ContainerStarted","Data":"a9c33122606e261a9c67ced037befc987b5552b20e2f0075b5b0788f1a341ab3"} Dec 11 10:09:14 crc kubenswrapper[4746]: E1211 10:09:14.827678 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4" podUID="5f3fcc59-b850-4041-84b3-9ccc788c73fc" Dec 11 10:09:14 crc kubenswrapper[4746]: E1211 10:09:14.836459 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-882l4" podUID="778c0ffc-7a48-4159-8a1b-f34a805bc1ae" Dec 11 10:09:14 crc kubenswrapper[4746]: I1211 10:09:14.871285 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-882l4" event={"ID":"778c0ffc-7a48-4159-8a1b-f34a805bc1ae","Type":"ContainerStarted","Data":"c6aca245ee5e4167da7837753127f6fced07b0bc874a48be56d0628a87625c7c"} Dec 11 10:09:14 crc kubenswrapper[4746]: I1211 10:09:14.873830 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7dcw7" event={"ID":"3b4184f0-3e35-4e70-9adc-87a4681c343c","Type":"ContainerStarted","Data":"6958fc262927e05754f4fe35e08d5dc5891e4696807759722d8e8502b4bef143"} Dec 11 10:09:14 crc kubenswrapper[4746]: I1211 10:09:14.874993 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" event={"ID":"648adc18-f046-4dcf-9a52-c69946ffa83a","Type":"ContainerStarted","Data":"87b7bae9aee84c86b1902295779f3a56f11a60415ebc620161cb32d86ba66ff5"} Dec 11 10:09:14 crc kubenswrapper[4746]: I1211 10:09:14.876170 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj" 
event={"ID":"4c437995-b526-4ae3-9956-b541694d54d4","Type":"ContainerStarted","Data":"ad4663f229fe20a8d7346206f47b32fed35ca07cb0aa042674d310135d5881a5"} Dec 11 10:09:14 crc kubenswrapper[4746]: I1211 10:09:14.877146 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj" Dec 11 10:09:14 crc kubenswrapper[4746]: I1211 10:09:14.879311 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7ftgl" event={"ID":"fecc9092-bba1-4488-af41-3d970dba0968","Type":"ContainerStarted","Data":"c97f5e91b95a1b78276a1347c040b107a3743c0d31c9de26cbe207c6788aa82f"} Dec 11 10:09:14 crc kubenswrapper[4746]: I1211 10:09:14.880495 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" event={"ID":"4a11cb95-3107-4526-8ab3-82bb6fd57cef","Type":"ContainerStarted","Data":"66709813cb0fe72571d0cc7da8b7356b1ee43aa54cc148e7e322c4ca80107960"} Dec 11 10:09:14 crc kubenswrapper[4746]: I1211 10:09:14.885344 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj" Dec 11 10:09:14 crc kubenswrapper[4746]: E1211 10:09:14.887222 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7dcw7" podUID="3b4184f0-3e35-4e70-9adc-87a4681c343c" Dec 11 10:09:14 crc kubenswrapper[4746]: I1211 10:09:14.887859 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rmqc" event={"ID":"50587a78-88d9-43a1-98d8-8b7941be4600","Type":"ContainerStarted","Data":"39b1a2abcf91b3bbd6e3c78bf3e0675b8222ec4dfd1e4487b2761a12686e7952"} Dec 11 10:09:14 
crc kubenswrapper[4746]: I1211 10:09:14.899653 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4" event={"ID":"5f3fcc59-b850-4041-84b3-9ccc788c73fc","Type":"ContainerStarted","Data":"a8be047cfdf7f9bcd7390433655fd960a6d7a8af6f8e6acda9e3255f75a229bc"} Dec 11 10:09:15 crc kubenswrapper[4746]: I1211 10:09:15.019316 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-wd9vj" podStartSLOduration=4.465596626 podStartE2EDuration="1m1.019292953s" podCreationTimestamp="2025-12-11 10:08:14 +0000 UTC" firstStartedPulling="2025-12-11 10:08:17.269208699 +0000 UTC m=+870.129072012" lastFinishedPulling="2025-12-11 10:09:13.822905026 +0000 UTC m=+926.682768339" observedRunningTime="2025-12-11 10:09:14.969341891 +0000 UTC m=+927.829205214" watchObservedRunningTime="2025-12-11 10:09:15.019292953 +0000 UTC m=+927.879156266" Dec 11 10:09:15 crc kubenswrapper[4746]: E1211 10:09:15.126519 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-b797r" podUID="3b8201ce-fb41-4474-9609-689fe0d093ec" Dec 11 10:09:15 crc kubenswrapper[4746]: E1211 10:09:15.271307 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-998648c74-g9qlp" podUID="9ea7dd8b-4871-43c0-a66f-113742627a6b" Dec 11 10:09:15 crc kubenswrapper[4746]: E1211 10:09:15.311088 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: 
context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5c8g4" podUID="5d8442a7-c511-4f69-b04e-45e750f27bfa" Dec 11 10:09:15 crc kubenswrapper[4746]: E1211 10:09:15.316111 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc" podUID="00e181e7-8b84-49f6-96c5-4da046644469" Dec 11 10:09:15 crc kubenswrapper[4746]: E1211 10:09:15.704536 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rq8z4" podUID="23f6b30a-57a8-4920-ab2e-dfebef4d9ce6" Dec 11 10:09:15 crc kubenswrapper[4746]: E1211 10:09:15.812943 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-kklt5" podUID="798a9e32-0bc8-4231-834a-fc2b002c87aa" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.060475 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-g9qlp" event={"ID":"9ea7dd8b-4871-43c0-a66f-113742627a6b","Type":"ContainerStarted","Data":"80c9dae5c849c8422fc813e6b717c4c25b59c83d80db78fa128f710d6efda7cf"} Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.081696 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-kklt5" 
event={"ID":"798a9e32-0bc8-4231-834a-fc2b002c87aa","Type":"ContainerStarted","Data":"b5c5d50071f672bca1c6a35889378320c9d719384052c8151dc809b55faca0aa"} Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.104090 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-dntwk" event={"ID":"2d353fc2-d0c0-47ed-be04-acc87fd980a7","Type":"ContainerStarted","Data":"70a7660760954fa2f177b3bb3d65d28c67da04f23d50215fafa8ae0af993e8ea"} Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.105214 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-dntwk" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.123620 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-dntwk" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.139570 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-b797r" event={"ID":"3b8201ce-fb41-4474-9609-689fe0d093ec","Type":"ContainerStarted","Data":"1fec0b28fd134a1338f6dd8178583bb1be67056d343b20013b0cd5ec66af847f"} Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.180195 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-glgvn" event={"ID":"b0e6f3b3-a8b7-4bca-8e55-118bd35a9635","Type":"ContainerStarted","Data":"09a52b7228240097c0f687ddc3c32069312467dda374cd6c9a18e215ba7a7a5e"} Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.181398 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-glgvn" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.187285 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-glgvn" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.209005 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t" event={"ID":"252b923b-a265-46c1-8c3e-9ef62d5b1f7a","Type":"ContainerStarted","Data":"3db0e3e7214ca61ef3d2e8dcca2daf9b467ad8bdc2a61109c708af460c169632"} Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.210411 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.215623 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.225999 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vr9wq" event={"ID":"efe16578-2d6a-40a9-9f8c-9b868a6d6a66","Type":"ContainerStarted","Data":"849bfb86b4b92640dcc58b101965296f538f630cd90ea35d1b9127102db8fedf"} Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.227156 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vr9wq" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.236970 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vr9wq" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.254220 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-dntwk" podStartSLOduration=6.626112521 podStartE2EDuration="1m3.25418213s" podCreationTimestamp="2025-12-11 10:08:13 +0000 UTC" firstStartedPulling="2025-12-11 
10:08:17.243311228 +0000 UTC m=+870.103174541" lastFinishedPulling="2025-12-11 10:09:13.871380837 +0000 UTC m=+926.731244150" observedRunningTime="2025-12-11 10:09:16.180847816 +0000 UTC m=+929.040711129" watchObservedRunningTime="2025-12-11 10:09:16.25418213 +0000 UTC m=+929.114045443" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.273121 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-glgvn" podStartSLOduration=5.742432032 podStartE2EDuration="1m3.273104442s" podCreationTimestamp="2025-12-11 10:08:13 +0000 UTC" firstStartedPulling="2025-12-11 10:08:16.450520497 +0000 UTC m=+869.310383810" lastFinishedPulling="2025-12-11 10:09:13.981192897 +0000 UTC m=+926.841056220" observedRunningTime="2025-12-11 10:09:16.271897279 +0000 UTC m=+929.131760592" watchObservedRunningTime="2025-12-11 10:09:16.273104442 +0000 UTC m=+929.132967755" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.277636 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-5gjg9" event={"ID":"01451fbe-7fd7-447c-b6ef-967f7ddff94b","Type":"ContainerStarted","Data":"740d3a2a1ba74e179ac5ebe168939e7ee5c71ad1903af97cb646986e5204fae1"} Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.280078 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-5gjg9" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.291402 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-5gjg9" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.307260 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc" 
event={"ID":"00e181e7-8b84-49f6-96c5-4da046644469","Type":"ContainerStarted","Data":"959786eae0532cd5983405c673e0d06cacf22e620dd34976bce970203bb503e0"} Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.334111 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" event={"ID":"648adc18-f046-4dcf-9a52-c69946ffa83a","Type":"ContainerStarted","Data":"b3770742604e56ed6a12ab271191bb3e10caa2177357ac140c5eaa14cea1464b"} Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.334751 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.336738 4746 generic.go:334] "Generic (PLEG): container finished" podID="50587a78-88d9-43a1-98d8-8b7941be4600" containerID="39b1a2abcf91b3bbd6e3c78bf3e0675b8222ec4dfd1e4487b2761a12686e7952" exitCode=0 Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.336778 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rmqc" event={"ID":"50587a78-88d9-43a1-98d8-8b7941be4600","Type":"ContainerDied","Data":"39b1a2abcf91b3bbd6e3c78bf3e0675b8222ec4dfd1e4487b2761a12686e7952"} Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.400988 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5c8g4" event={"ID":"5d8442a7-c511-4f69-b04e-45e750f27bfa","Type":"ContainerStarted","Data":"fbba9ac3a7976415a547b3ebcea13e1c225dedbcf9a57696b677e70e793fe70c"} Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.450173 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-bkvrk" 
event={"ID":"f9f2bc47-53f4-4216-8fb2-27f2db87123e","Type":"ContainerStarted","Data":"79cd1852067ba9469bb88161fc0a38239cdb06abd63800f2ff7bd76779608a1d"} Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.452832 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-bkvrk" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.455270 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz" event={"ID":"9b6740eb-7439-465a-b30a-c838a4d65be6","Type":"ContainerStarted","Data":"5078f6200bcec35f538f055c835343de3733bc4b98a482e6343613f7e50fafd8"} Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.458666 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.463019 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-zstsf" event={"ID":"b47efaee-6921-4f8b-876a-3cf52bd10a27","Type":"ContainerStarted","Data":"e6b1e25a1a500eb162cf8718acd4921de1c92c5a3a1e99e5f6888033e28f9194"} Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.463068 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-zstsf" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.481745 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-zstsf" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.500360 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.500464 4746 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-bkvrk" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.508668 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-8pn4t" podStartSLOduration=5.677927483 podStartE2EDuration="1m2.508657992s" podCreationTimestamp="2025-12-11 10:08:14 +0000 UTC" firstStartedPulling="2025-12-11 10:08:17.125991736 +0000 UTC m=+869.985855049" lastFinishedPulling="2025-12-11 10:09:13.956722235 +0000 UTC m=+926.816585558" observedRunningTime="2025-12-11 10:09:16.507856781 +0000 UTC m=+929.367720094" watchObservedRunningTime="2025-12-11 10:09:16.508657992 +0000 UTC m=+929.368521305" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.547232 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-vr9wq" podStartSLOduration=5.72437221 podStartE2EDuration="1m2.547215165s" podCreationTimestamp="2025-12-11 10:08:14 +0000 UTC" firstStartedPulling="2025-12-11 10:08:17.092737927 +0000 UTC m=+869.952601240" lastFinishedPulling="2025-12-11 10:09:13.915580882 +0000 UTC m=+926.775444195" observedRunningTime="2025-12-11 10:09:16.543899586 +0000 UTC m=+929.403762909" watchObservedRunningTime="2025-12-11 10:09:16.547215165 +0000 UTC m=+929.407078478" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.631915 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-bkvrk" podStartSLOduration=6.73994058 podStartE2EDuration="1m3.631897175s" podCreationTimestamp="2025-12-11 10:08:13 +0000 UTC" firstStartedPulling="2025-12-11 10:08:16.977927322 +0000 UTC m=+869.837790635" lastFinishedPulling="2025-12-11 10:09:13.869883917 +0000 UTC m=+926.729747230" observedRunningTime="2025-12-11 10:09:16.627449205 +0000 
UTC m=+929.487312518" watchObservedRunningTime="2025-12-11 10:09:16.631897175 +0000 UTC m=+929.491760498" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.800736 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-967d97867-zstsf" podStartSLOduration=5.879208347 podStartE2EDuration="1m2.800707061s" podCreationTimestamp="2025-12-11 10:08:14 +0000 UTC" firstStartedPulling="2025-12-11 10:08:16.941674131 +0000 UTC m=+869.801537444" lastFinishedPulling="2025-12-11 10:09:13.863172845 +0000 UTC m=+926.723036158" observedRunningTime="2025-12-11 10:09:16.79879457 +0000 UTC m=+929.658657893" watchObservedRunningTime="2025-12-11 10:09:16.800707061 +0000 UTC m=+929.660570384" Dec 11 10:09:16 crc kubenswrapper[4746]: I1211 10:09:16.989601 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" podStartSLOduration=51.637043109 podStartE2EDuration="1m2.989577399s" podCreationTimestamp="2025-12-11 10:08:14 +0000 UTC" firstStartedPulling="2025-12-11 10:09:02.355868959 +0000 UTC m=+915.215732272" lastFinishedPulling="2025-12-11 10:09:13.708403219 +0000 UTC m=+926.568266562" observedRunningTime="2025-12-11 10:09:16.932467204 +0000 UTC m=+929.792330517" watchObservedRunningTime="2025-12-11 10:09:16.989577399 +0000 UTC m=+929.849440712" Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.032490 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-5gjg9" podStartSLOduration=7.276983665 podStartE2EDuration="1m4.032474679s" podCreationTimestamp="2025-12-11 10:08:13 +0000 UTC" firstStartedPulling="2025-12-11 10:08:17.105404559 +0000 UTC m=+869.965267872" lastFinishedPulling="2025-12-11 10:09:13.860895573 +0000 UTC m=+926.720758886" observedRunningTime="2025-12-11 10:09:16.996925687 +0000 UTC 
m=+929.856789020" watchObservedRunningTime="2025-12-11 10:09:17.032474679 +0000 UTC m=+929.892337992" Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.037304 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-x5ghz" podStartSLOduration=6.285194767 podStartE2EDuration="1m3.037286569s" podCreationTimestamp="2025-12-11 10:08:14 +0000 UTC" firstStartedPulling="2025-12-11 10:08:17.118722679 +0000 UTC m=+869.978585992" lastFinishedPulling="2025-12-11 10:09:13.870814471 +0000 UTC m=+926.730677794" observedRunningTime="2025-12-11 10:09:17.031315948 +0000 UTC m=+929.891179261" watchObservedRunningTime="2025-12-11 10:09:17.037286569 +0000 UTC m=+929.897149882" Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.478539 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" event={"ID":"4a11cb95-3107-4526-8ab3-82bb6fd57cef","Type":"ContainerStarted","Data":"0cf172064617f622a7d8496b5433d1ad251b5434769b9083951832fddc322d97"} Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.478730 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.487717 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-686fb77d86-hmhr5" Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.493449 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4" event={"ID":"5f3fcc59-b850-4041-84b3-9ccc788c73fc","Type":"ContainerStarted","Data":"e0ec7d225fcf539e97cdd9074b346bdf86264503bae0db2416393a6964f09433"} Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.494112 4746 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4" Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.504332 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-882l4" event={"ID":"778c0ffc-7a48-4159-8a1b-f34a805bc1ae","Type":"ContainerStarted","Data":"9370453310fa1084ef6441cacf72cf3406e48e685bfbcc8c99417c14c187c505"} Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.504914 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-882l4" Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.524176 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2" event={"ID":"4890e377-1482-4341-b002-bb54e05d5ded","Type":"ContainerStarted","Data":"af6308cb0900c4d32cdb9fcfbeddabdfc61a3d0736ea700127e487681a7649ee"} Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.524832 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2" Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.543765 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4" podStartSLOduration=4.583156126 podStartE2EDuration="1m3.543730606s" podCreationTimestamp="2025-12-11 10:08:14 +0000 UTC" firstStartedPulling="2025-12-11 10:08:17.270443093 +0000 UTC m=+870.130306406" lastFinishedPulling="2025-12-11 10:09:16.231017573 +0000 UTC m=+929.090880886" observedRunningTime="2025-12-11 10:09:17.541580568 +0000 UTC m=+930.401443881" watchObservedRunningTime="2025-12-11 10:09:17.543730606 +0000 UTC m=+930.403593919" Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.545306 4746 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" podStartSLOduration=52.244939761 podStartE2EDuration="1m3.545300779s" podCreationTimestamp="2025-12-11 10:08:14 +0000 UTC" firstStartedPulling="2025-12-11 10:09:02.364705099 +0000 UTC m=+915.224568412" lastFinishedPulling="2025-12-11 10:09:13.665066107 +0000 UTC m=+926.524929430" observedRunningTime="2025-12-11 10:09:17.510350804 +0000 UTC m=+930.370214117" watchObservedRunningTime="2025-12-11 10:09:17.545300779 +0000 UTC m=+930.405164092" Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.549254 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-kklt5" event={"ID":"798a9e32-0bc8-4231-834a-fc2b002c87aa","Type":"ContainerStarted","Data":"b1a9509c371fe2dc093acb87ec9a5fe953690b7326e68949617f9098b33d456e"} Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.549971 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-kklt5" Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.567365 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7ftgl" event={"ID":"fecc9092-bba1-4488-af41-3d970dba0968","Type":"ContainerStarted","Data":"e32a4e1eb9456c8e96d190e69c40f331f30ec3a2a141ac21562ffa7c7054757b"} Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.578875 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-882l4" podStartSLOduration=5.125604395 podStartE2EDuration="1m3.578858236s" podCreationTimestamp="2025-12-11 10:08:14 +0000 UTC" firstStartedPulling="2025-12-11 10:08:17.252757764 +0000 UTC m=+870.112621077" lastFinishedPulling="2025-12-11 10:09:15.706011605 +0000 UTC m=+928.565874918" observedRunningTime="2025-12-11 10:09:17.57345412 
+0000 UTC m=+930.433317433" watchObservedRunningTime="2025-12-11 10:09:17.578858236 +0000 UTC m=+930.438721909" Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.946418 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-kklt5" podStartSLOduration=4.13890253 podStartE2EDuration="1m3.946398656s" podCreationTimestamp="2025-12-11 10:08:14 +0000 UTC" firstStartedPulling="2025-12-11 10:08:17.112818149 +0000 UTC m=+869.972681462" lastFinishedPulling="2025-12-11 10:09:16.920314275 +0000 UTC m=+929.780177588" observedRunningTime="2025-12-11 10:09:17.902184981 +0000 UTC m=+930.762048314" watchObservedRunningTime="2025-12-11 10:09:17.946398656 +0000 UTC m=+930.806261969" Dec 11 10:09:17 crc kubenswrapper[4746]: I1211 10:09:17.998731 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2" podStartSLOduration=18.003009997 podStartE2EDuration="1m3.998715492s" podCreationTimestamp="2025-12-11 10:08:14 +0000 UTC" firstStartedPulling="2025-12-11 10:08:17.118968756 +0000 UTC m=+869.978832069" lastFinishedPulling="2025-12-11 10:09:03.114674251 +0000 UTC m=+915.974537564" observedRunningTime="2025-12-11 10:09:17.948326949 +0000 UTC m=+930.808190272" watchObservedRunningTime="2025-12-11 10:09:17.998715492 +0000 UTC m=+930.858578805" Dec 11 10:09:18 crc kubenswrapper[4746]: I1211 10:09:18.006007 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7ftgl" podStartSLOduration=8.374184629 podStartE2EDuration="1m5.005988998s" podCreationTimestamp="2025-12-11 10:08:13 +0000 UTC" firstStartedPulling="2025-12-11 10:08:17.078733848 +0000 UTC m=+869.938597161" lastFinishedPulling="2025-12-11 10:09:13.710538217 +0000 UTC m=+926.570401530" observedRunningTime="2025-12-11 10:09:17.987687573 +0000 UTC m=+930.847550906" 
watchObservedRunningTime="2025-12-11 10:09:18.005988998 +0000 UTC m=+930.865852311" Dec 11 10:09:18 crc kubenswrapper[4746]: I1211 10:09:18.868931 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc" event={"ID":"00e181e7-8b84-49f6-96c5-4da046644469","Type":"ContainerStarted","Data":"fa8f39fefc82aade3f73244cb5e2f43569e8f5e8a3c8d62fdc9cd83ded64fb11"} Dec 11 10:09:18 crc kubenswrapper[4746]: I1211 10:09:18.870467 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc" Dec 11 10:09:18 crc kubenswrapper[4746]: I1211 10:09:18.893654 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-g9qlp" event={"ID":"9ea7dd8b-4871-43c0-a66f-113742627a6b","Type":"ContainerStarted","Data":"a9ac87861b783bc0209eed62c513f7863c221191f8089690a6221eb420b4266e"} Dec 11 10:09:18 crc kubenswrapper[4746]: I1211 10:09:18.893708 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-g9qlp" Dec 11 10:09:18 crc kubenswrapper[4746]: I1211 10:09:18.902871 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7dcw7" event={"ID":"3b4184f0-3e35-4e70-9adc-87a4681c343c","Type":"ContainerStarted","Data":"45a0ea138156c580c12083d75e2f761ac5f9cb6a870a447d62df0dc215854e89"} Dec 11 10:09:18 crc kubenswrapper[4746]: I1211 10:09:18.906397 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-b797r" event={"ID":"3b8201ce-fb41-4474-9609-689fe0d093ec","Type":"ContainerStarted","Data":"c6074bc5cf754c981d9457bddf5faef8f639ce2ad19db2499f823eecdda49fb1"} Dec 11 10:09:18 crc kubenswrapper[4746]: I1211 10:09:18.909521 4746 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-b797r" Dec 11 10:09:18 crc kubenswrapper[4746]: I1211 10:09:18.916468 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5c8g4" event={"ID":"5d8442a7-c511-4f69-b04e-45e750f27bfa","Type":"ContainerStarted","Data":"89d90f6a0b171ccecba954120c31068b4ff642255475569873b93ed157a19480"} Dec 11 10:09:18 crc kubenswrapper[4746]: I1211 10:09:18.923003 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7ftgl" Dec 11 10:09:18 crc kubenswrapper[4746]: I1211 10:09:18.923117 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-kz5f2" Dec 11 10:09:18 crc kubenswrapper[4746]: I1211 10:09:18.946644 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc" podStartSLOduration=4.871461362 podStartE2EDuration="1m4.946626838s" podCreationTimestamp="2025-12-11 10:08:14 +0000 UTC" firstStartedPulling="2025-12-11 10:08:17.134122375 +0000 UTC m=+869.993985688" lastFinishedPulling="2025-12-11 10:09:17.209287851 +0000 UTC m=+930.069151164" observedRunningTime="2025-12-11 10:09:18.922423343 +0000 UTC m=+931.782286666" watchObservedRunningTime="2025-12-11 10:09:18.946626838 +0000 UTC m=+931.806490151" Dec 11 10:09:18 crc kubenswrapper[4746]: I1211 10:09:18.968011 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-g9qlp" podStartSLOduration=4.271936969 podStartE2EDuration="1m4.967987665s" podCreationTimestamp="2025-12-11 10:08:14 +0000 UTC" firstStartedPulling="2025-12-11 10:08:16.771721085 +0000 UTC m=+869.631584398" lastFinishedPulling="2025-12-11 10:09:17.467771781 
+0000 UTC m=+930.327635094" observedRunningTime="2025-12-11 10:09:18.961497069 +0000 UTC m=+931.821360382" watchObservedRunningTime="2025-12-11 10:09:18.967987665 +0000 UTC m=+931.827850978" Dec 11 10:09:19 crc kubenswrapper[4746]: I1211 10:09:19.013858 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-b797r" podStartSLOduration=4.70354757 podStartE2EDuration="1m5.013831845s" podCreationTimestamp="2025-12-11 10:08:14 +0000 UTC" firstStartedPulling="2025-12-11 10:08:16.771340624 +0000 UTC m=+869.631203937" lastFinishedPulling="2025-12-11 10:09:17.081624899 +0000 UTC m=+929.941488212" observedRunningTime="2025-12-11 10:09:19.008588633 +0000 UTC m=+931.868451956" watchObservedRunningTime="2025-12-11 10:09:19.013831845 +0000 UTC m=+931.873695158" Dec 11 10:09:19 crc kubenswrapper[4746]: I1211 10:09:19.032103 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5c8g4" podStartSLOduration=4.663104968 podStartE2EDuration="1m5.032078009s" podCreationTimestamp="2025-12-11 10:08:14 +0000 UTC" firstStartedPulling="2025-12-11 10:08:17.102288195 +0000 UTC m=+869.962151508" lastFinishedPulling="2025-12-11 10:09:17.471261236 +0000 UTC m=+930.331124549" observedRunningTime="2025-12-11 10:09:19.028024239 +0000 UTC m=+931.887887552" watchObservedRunningTime="2025-12-11 10:09:19.032078009 +0000 UTC m=+931.891941322" Dec 11 10:09:19 crc kubenswrapper[4746]: I1211 10:09:19.165756 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7dcw7" podStartSLOduration=5.506564874 podStartE2EDuration="1m6.165723053s" podCreationTimestamp="2025-12-11 10:08:13 +0000 UTC" firstStartedPulling="2025-12-11 10:08:17.042482197 +0000 UTC m=+869.902345500" lastFinishedPulling="2025-12-11 10:09:17.701640366 +0000 UTC m=+930.561503679" 
observedRunningTime="2025-12-11 10:09:19.143335537 +0000 UTC m=+932.003198860" watchObservedRunningTime="2025-12-11 10:09:19.165723053 +0000 UTC m=+932.025586366" Dec 11 10:09:19 crc kubenswrapper[4746]: I1211 10:09:19.924264 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5c8g4" Dec 11 10:09:19 crc kubenswrapper[4746]: I1211 10:09:19.933396 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-7ftgl" Dec 11 10:09:20 crc kubenswrapper[4746]: I1211 10:09:20.389217 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-n8c44" Dec 11 10:09:20 crc kubenswrapper[4746]: I1211 10:09:20.962017 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5rmqc" event={"ID":"50587a78-88d9-43a1-98d8-8b7941be4600","Type":"ContainerStarted","Data":"647da6054b2a9929e192823863e676c5c4b40a9ef459d4bf08c3acfd82d4a23e"} Dec 11 10:09:21 crc kubenswrapper[4746]: I1211 10:09:21.022698 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879f8tnmf" Dec 11 10:09:22 crc kubenswrapper[4746]: I1211 10:09:22.004026 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5rmqc" podStartSLOduration=7.652452549 podStartE2EDuration="1m9.003997644s" podCreationTimestamp="2025-12-11 10:08:13 +0000 UTC" firstStartedPulling="2025-12-11 10:08:16.795605201 +0000 UTC m=+869.655468514" lastFinishedPulling="2025-12-11 10:09:18.147150296 +0000 UTC m=+931.007013609" observedRunningTime="2025-12-11 10:09:21.996573503 +0000 UTC m=+934.856436816" watchObservedRunningTime="2025-12-11 10:09:22.003997644 +0000 UTC m=+934.863860957" Dec 11 
10:09:24 crc kubenswrapper[4746]: I1211 10:09:24.087244 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5rmqc" Dec 11 10:09:24 crc kubenswrapper[4746]: I1211 10:09:24.087318 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5rmqc" Dec 11 10:09:24 crc kubenswrapper[4746]: I1211 10:09:24.567118 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7dcw7" Dec 11 10:09:24 crc kubenswrapper[4746]: I1211 10:09:24.571325 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-7dcw7" Dec 11 10:09:25 crc kubenswrapper[4746]: I1211 10:09:25.275124 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5rmqc" podUID="50587a78-88d9-43a1-98d8-8b7941be4600" containerName="registry-server" probeResult="failure" output=< Dec 11 10:09:25 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s Dec 11 10:09:25 crc kubenswrapper[4746]: > Dec 11 10:09:25 crc kubenswrapper[4746]: I1211 10:09:25.275439 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5c8g4" Dec 11 10:09:25 crc kubenswrapper[4746]: I1211 10:09:25.456182 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-882l4" Dec 11 10:09:25 crc kubenswrapper[4746]: I1211 10:09:25.516995 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-g9qlp" Dec 11 10:09:25 crc kubenswrapper[4746]: I1211 10:09:25.540518 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-b797r" Dec 11 10:09:25 crc kubenswrapper[4746]: I1211 10:09:25.578697 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-kklt5" Dec 11 10:09:25 crc kubenswrapper[4746]: I1211 10:09:25.643889 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-zbwxc" Dec 11 10:09:25 crc kubenswrapper[4746]: I1211 10:09:25.804310 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-75944c9b7-sg2q4" Dec 11 10:09:34 crc kubenswrapper[4746]: I1211 10:09:34.134510 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5rmqc" Dec 11 10:09:34 crc kubenswrapper[4746]: I1211 10:09:34.184174 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5rmqc" Dec 11 10:09:34 crc kubenswrapper[4746]: I1211 10:09:34.252386 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5rmqc"] Dec 11 10:09:34 crc kubenswrapper[4746]: I1211 10:09:34.254397 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rq8z4" event={"ID":"23f6b30a-57a8-4920-ab2e-dfebef4d9ce6","Type":"ContainerStarted","Data":"ad42797fbd1ac1503c33e62dbd2b0ed6a636d34f9a8a682e3b060a7d1b05487e"} Dec 11 10:09:34 crc kubenswrapper[4746]: I1211 10:09:34.322374 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-rq8z4" podStartSLOduration=2.784075944 podStartE2EDuration="1m19.322352104s" podCreationTimestamp="2025-12-11 10:08:15 +0000 UTC" firstStartedPulling="2025-12-11 
10:08:17.126354166 +0000 UTC m=+869.986217479" lastFinishedPulling="2025-12-11 10:09:33.664630326 +0000 UTC m=+946.524493639" observedRunningTime="2025-12-11 10:09:34.319014574 +0000 UTC m=+947.178877897" watchObservedRunningTime="2025-12-11 10:09:34.322352104 +0000 UTC m=+947.182215417" Dec 11 10:09:34 crc kubenswrapper[4746]: I1211 10:09:34.372186 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mh7kg"] Dec 11 10:09:34 crc kubenswrapper[4746]: I1211 10:09:34.372825 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mh7kg" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" containerName="registry-server" containerID="cri-o://f4da99c181a0312257e2b61edede0f671b38269e8745939f4dc6077c18214952" gracePeriod=2 Dec 11 10:09:34 crc kubenswrapper[4746]: I1211 10:09:34.832230 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mh7kg" Dec 11 10:09:34 crc kubenswrapper[4746]: I1211 10:09:34.938096 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzfv6\" (UniqueName: \"kubernetes.io/projected/cc103c2c-47fd-44d7-819d-2a75b4a198de-kube-api-access-qzfv6\") pod \"cc103c2c-47fd-44d7-819d-2a75b4a198de\" (UID: \"cc103c2c-47fd-44d7-819d-2a75b4a198de\") " Dec 11 10:09:34 crc kubenswrapper[4746]: I1211 10:09:34.938331 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc103c2c-47fd-44d7-819d-2a75b4a198de-utilities\") pod \"cc103c2c-47fd-44d7-819d-2a75b4a198de\" (UID: \"cc103c2c-47fd-44d7-819d-2a75b4a198de\") " Dec 11 10:09:34 crc kubenswrapper[4746]: I1211 10:09:34.938362 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cc103c2c-47fd-44d7-819d-2a75b4a198de-catalog-content\") pod \"cc103c2c-47fd-44d7-819d-2a75b4a198de\" (UID: \"cc103c2c-47fd-44d7-819d-2a75b4a198de\") " Dec 11 10:09:34 crc kubenswrapper[4746]: I1211 10:09:34.939814 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc103c2c-47fd-44d7-819d-2a75b4a198de-utilities" (OuterVolumeSpecName: "utilities") pod "cc103c2c-47fd-44d7-819d-2a75b4a198de" (UID: "cc103c2c-47fd-44d7-819d-2a75b4a198de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:09:34 crc kubenswrapper[4746]: I1211 10:09:34.949243 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc103c2c-47fd-44d7-819d-2a75b4a198de-kube-api-access-qzfv6" (OuterVolumeSpecName: "kube-api-access-qzfv6") pod "cc103c2c-47fd-44d7-819d-2a75b4a198de" (UID: "cc103c2c-47fd-44d7-819d-2a75b4a198de"). InnerVolumeSpecName "kube-api-access-qzfv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:09:34 crc kubenswrapper[4746]: I1211 10:09:34.950226 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc103c2c-47fd-44d7-819d-2a75b4a198de-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:09:34 crc kubenswrapper[4746]: I1211 10:09:34.950819 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzfv6\" (UniqueName: \"kubernetes.io/projected/cc103c2c-47fd-44d7-819d-2a75b4a198de-kube-api-access-qzfv6\") on node \"crc\" DevicePath \"\"" Dec 11 10:09:34 crc kubenswrapper[4746]: I1211 10:09:34.995510 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc103c2c-47fd-44d7-819d-2a75b4a198de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc103c2c-47fd-44d7-819d-2a75b4a198de" (UID: "cc103c2c-47fd-44d7-819d-2a75b4a198de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:09:35 crc kubenswrapper[4746]: I1211 10:09:35.059001 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc103c2c-47fd-44d7-819d-2a75b4a198de-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:09:35 crc kubenswrapper[4746]: I1211 10:09:35.264936 4746 generic.go:334] "Generic (PLEG): container finished" podID="cc103c2c-47fd-44d7-819d-2a75b4a198de" containerID="f4da99c181a0312257e2b61edede0f671b38269e8745939f4dc6077c18214952" exitCode=0 Dec 11 10:09:35 crc kubenswrapper[4746]: I1211 10:09:35.265067 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mh7kg" Dec 11 10:09:35 crc kubenswrapper[4746]: I1211 10:09:35.265059 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh7kg" event={"ID":"cc103c2c-47fd-44d7-819d-2a75b4a198de","Type":"ContainerDied","Data":"f4da99c181a0312257e2b61edede0f671b38269e8745939f4dc6077c18214952"} Dec 11 10:09:35 crc kubenswrapper[4746]: I1211 10:09:35.265169 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mh7kg" event={"ID":"cc103c2c-47fd-44d7-819d-2a75b4a198de","Type":"ContainerDied","Data":"c18df20c0d0bf65e0cdee0c8e0721119531f59c2eacb7d475b53859646faef84"} Dec 11 10:09:35 crc kubenswrapper[4746]: I1211 10:09:35.265213 4746 scope.go:117] "RemoveContainer" containerID="f4da99c181a0312257e2b61edede0f671b38269e8745939f4dc6077c18214952" Dec 11 10:09:35 crc kubenswrapper[4746]: I1211 10:09:35.296859 4746 scope.go:117] "RemoveContainer" containerID="0f6dc65f00508b697462046607523d406fdcbb03767545fb0d21cce3b0d9f6c7" Dec 11 10:09:35 crc kubenswrapper[4746]: I1211 10:09:35.320854 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mh7kg"] Dec 11 10:09:35 crc kubenswrapper[4746]: 
I1211 10:09:35.327987 4746 scope.go:117] "RemoveContainer" containerID="a8c03f06c90cf8e015d0cf3f9c950e34ef2c8e9a8557973f8eeb123aba17a80d" Dec 11 10:09:35 crc kubenswrapper[4746]: I1211 10:09:35.334247 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mh7kg"] Dec 11 10:09:35 crc kubenswrapper[4746]: I1211 10:09:35.351937 4746 scope.go:117] "RemoveContainer" containerID="f4da99c181a0312257e2b61edede0f671b38269e8745939f4dc6077c18214952" Dec 11 10:09:35 crc kubenswrapper[4746]: E1211 10:09:35.352392 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4da99c181a0312257e2b61edede0f671b38269e8745939f4dc6077c18214952\": container with ID starting with f4da99c181a0312257e2b61edede0f671b38269e8745939f4dc6077c18214952 not found: ID does not exist" containerID="f4da99c181a0312257e2b61edede0f671b38269e8745939f4dc6077c18214952" Dec 11 10:09:35 crc kubenswrapper[4746]: I1211 10:09:35.352444 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4da99c181a0312257e2b61edede0f671b38269e8745939f4dc6077c18214952"} err="failed to get container status \"f4da99c181a0312257e2b61edede0f671b38269e8745939f4dc6077c18214952\": rpc error: code = NotFound desc = could not find container \"f4da99c181a0312257e2b61edede0f671b38269e8745939f4dc6077c18214952\": container with ID starting with f4da99c181a0312257e2b61edede0f671b38269e8745939f4dc6077c18214952 not found: ID does not exist" Dec 11 10:09:35 crc kubenswrapper[4746]: I1211 10:09:35.352472 4746 scope.go:117] "RemoveContainer" containerID="0f6dc65f00508b697462046607523d406fdcbb03767545fb0d21cce3b0d9f6c7" Dec 11 10:09:35 crc kubenswrapper[4746]: E1211 10:09:35.352787 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f6dc65f00508b697462046607523d406fdcbb03767545fb0d21cce3b0d9f6c7\": container 
with ID starting with 0f6dc65f00508b697462046607523d406fdcbb03767545fb0d21cce3b0d9f6c7 not found: ID does not exist" containerID="0f6dc65f00508b697462046607523d406fdcbb03767545fb0d21cce3b0d9f6c7" Dec 11 10:09:35 crc kubenswrapper[4746]: I1211 10:09:35.352833 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f6dc65f00508b697462046607523d406fdcbb03767545fb0d21cce3b0d9f6c7"} err="failed to get container status \"0f6dc65f00508b697462046607523d406fdcbb03767545fb0d21cce3b0d9f6c7\": rpc error: code = NotFound desc = could not find container \"0f6dc65f00508b697462046607523d406fdcbb03767545fb0d21cce3b0d9f6c7\": container with ID starting with 0f6dc65f00508b697462046607523d406fdcbb03767545fb0d21cce3b0d9f6c7 not found: ID does not exist" Dec 11 10:09:35 crc kubenswrapper[4746]: I1211 10:09:35.352855 4746 scope.go:117] "RemoveContainer" containerID="a8c03f06c90cf8e015d0cf3f9c950e34ef2c8e9a8557973f8eeb123aba17a80d" Dec 11 10:09:35 crc kubenswrapper[4746]: E1211 10:09:35.353379 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8c03f06c90cf8e015d0cf3f9c950e34ef2c8e9a8557973f8eeb123aba17a80d\": container with ID starting with a8c03f06c90cf8e015d0cf3f9c950e34ef2c8e9a8557973f8eeb123aba17a80d not found: ID does not exist" containerID="a8c03f06c90cf8e015d0cf3f9c950e34ef2c8e9a8557973f8eeb123aba17a80d" Dec 11 10:09:35 crc kubenswrapper[4746]: I1211 10:09:35.353461 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8c03f06c90cf8e015d0cf3f9c950e34ef2c8e9a8557973f8eeb123aba17a80d"} err="failed to get container status \"a8c03f06c90cf8e015d0cf3f9c950e34ef2c8e9a8557973f8eeb123aba17a80d\": rpc error: code = NotFound desc = could not find container \"a8c03f06c90cf8e015d0cf3f9c950e34ef2c8e9a8557973f8eeb123aba17a80d\": container with ID starting with a8c03f06c90cf8e015d0cf3f9c950e34ef2c8e9a8557973f8eeb123aba17a80d not 
found: ID does not exist" Dec 11 10:09:35 crc kubenswrapper[4746]: I1211 10:09:35.638931 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" path="/var/lib/kubelet/pods/cc103c2c-47fd-44d7-819d-2a75b4a198de/volumes" Dec 11 10:09:48 crc kubenswrapper[4746]: I1211 10:09:48.946642 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7xrhq"] Dec 11 10:09:48 crc kubenswrapper[4746]: E1211 10:09:48.947586 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" containerName="extract-utilities" Dec 11 10:09:48 crc kubenswrapper[4746]: I1211 10:09:48.947601 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" containerName="extract-utilities" Dec 11 10:09:48 crc kubenswrapper[4746]: E1211 10:09:48.947627 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" containerName="extract-content" Dec 11 10:09:48 crc kubenswrapper[4746]: I1211 10:09:48.947634 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" containerName="extract-content" Dec 11 10:09:48 crc kubenswrapper[4746]: E1211 10:09:48.947642 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" containerName="registry-server" Dec 11 10:09:48 crc kubenswrapper[4746]: I1211 10:09:48.947649 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" containerName="registry-server" Dec 11 10:09:48 crc kubenswrapper[4746]: I1211 10:09:48.947828 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc103c2c-47fd-44d7-819d-2a75b4a198de" containerName="registry-server" Dec 11 10:09:48 crc kubenswrapper[4746]: I1211 10:09:48.948742 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7xrhq" Dec 11 10:09:48 crc kubenswrapper[4746]: I1211 10:09:48.954781 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-zvcw5" Dec 11 10:09:48 crc kubenswrapper[4746]: I1211 10:09:48.955077 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 11 10:09:48 crc kubenswrapper[4746]: I1211 10:09:48.955174 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 11 10:09:48 crc kubenswrapper[4746]: I1211 10:09:48.956278 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 11 10:09:48 crc kubenswrapper[4746]: I1211 10:09:48.976508 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7xrhq"] Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.043660 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2rjdf"] Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.075246 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2rjdf" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.084649 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.100227 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2rjdf"] Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.155184 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-config\") pod \"dnsmasq-dns-78dd6ddcc-2rjdf\" (UID: \"80d59e5e-7506-4206-8cfe-43bf85fd6d0d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rjdf" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.155291 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1680d5f5-0717-4a15-9d49-e692b9d36ebd-config\") pod \"dnsmasq-dns-675f4bcbfc-7xrhq\" (UID: \"1680d5f5-0717-4a15-9d49-e692b9d36ebd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7xrhq" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.155604 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2rjdf\" (UID: \"80d59e5e-7506-4206-8cfe-43bf85fd6d0d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rjdf" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.155895 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vr86\" (UniqueName: \"kubernetes.io/projected/1680d5f5-0717-4a15-9d49-e692b9d36ebd-kube-api-access-7vr86\") pod \"dnsmasq-dns-675f4bcbfc-7xrhq\" (UID: \"1680d5f5-0717-4a15-9d49-e692b9d36ebd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7xrhq" Dec 11 10:09:49 
crc kubenswrapper[4746]: I1211 10:09:49.155955 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9fnm\" (UniqueName: \"kubernetes.io/projected/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-kube-api-access-g9fnm\") pod \"dnsmasq-dns-78dd6ddcc-2rjdf\" (UID: \"80d59e5e-7506-4206-8cfe-43bf85fd6d0d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rjdf" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.257370 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vr86\" (UniqueName: \"kubernetes.io/projected/1680d5f5-0717-4a15-9d49-e692b9d36ebd-kube-api-access-7vr86\") pod \"dnsmasq-dns-675f4bcbfc-7xrhq\" (UID: \"1680d5f5-0717-4a15-9d49-e692b9d36ebd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7xrhq" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.257429 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9fnm\" (UniqueName: \"kubernetes.io/projected/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-kube-api-access-g9fnm\") pod \"dnsmasq-dns-78dd6ddcc-2rjdf\" (UID: \"80d59e5e-7506-4206-8cfe-43bf85fd6d0d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rjdf" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.257462 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-config\") pod \"dnsmasq-dns-78dd6ddcc-2rjdf\" (UID: \"80d59e5e-7506-4206-8cfe-43bf85fd6d0d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rjdf" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.257526 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1680d5f5-0717-4a15-9d49-e692b9d36ebd-config\") pod \"dnsmasq-dns-675f4bcbfc-7xrhq\" (UID: \"1680d5f5-0717-4a15-9d49-e692b9d36ebd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7xrhq" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 
10:09:49.257582 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2rjdf\" (UID: \"80d59e5e-7506-4206-8cfe-43bf85fd6d0d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rjdf" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.258556 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-2rjdf\" (UID: \"80d59e5e-7506-4206-8cfe-43bf85fd6d0d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rjdf" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.259558 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-config\") pod \"dnsmasq-dns-78dd6ddcc-2rjdf\" (UID: \"80d59e5e-7506-4206-8cfe-43bf85fd6d0d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rjdf" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.261718 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1680d5f5-0717-4a15-9d49-e692b9d36ebd-config\") pod \"dnsmasq-dns-675f4bcbfc-7xrhq\" (UID: \"1680d5f5-0717-4a15-9d49-e692b9d36ebd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7xrhq" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.281134 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9fnm\" (UniqueName: \"kubernetes.io/projected/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-kube-api-access-g9fnm\") pod \"dnsmasq-dns-78dd6ddcc-2rjdf\" (UID: \"80d59e5e-7506-4206-8cfe-43bf85fd6d0d\") " pod="openstack/dnsmasq-dns-78dd6ddcc-2rjdf" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.282712 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vr86\" (UniqueName: 
\"kubernetes.io/projected/1680d5f5-0717-4a15-9d49-e692b9d36ebd-kube-api-access-7vr86\") pod \"dnsmasq-dns-675f4bcbfc-7xrhq\" (UID: \"1680d5f5-0717-4a15-9d49-e692b9d36ebd\") " pod="openstack/dnsmasq-dns-675f4bcbfc-7xrhq" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.411089 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2rjdf" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.583647 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7xrhq" Dec 11 10:09:49 crc kubenswrapper[4746]: I1211 10:09:49.882537 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2rjdf"] Dec 11 10:09:49 crc kubenswrapper[4746]: W1211 10:09:49.889050 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80d59e5e_7506_4206_8cfe_43bf85fd6d0d.slice/crio-8c8d1e88b881d2539c2053d7ea3ce3792d687d90d339bcc16a3b6aa22ae66c78 WatchSource:0}: Error finding container 8c8d1e88b881d2539c2053d7ea3ce3792d687d90d339bcc16a3b6aa22ae66c78: Status 404 returned error can't find the container with id 8c8d1e88b881d2539c2053d7ea3ce3792d687d90d339bcc16a3b6aa22ae66c78 Dec 11 10:09:50 crc kubenswrapper[4746]: W1211 10:09:50.079373 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1680d5f5_0717_4a15_9d49_e692b9d36ebd.slice/crio-17c9a63a5e30880d9cda09c239a425b11859efc266e952ed5b769b23ba44fa9c WatchSource:0}: Error finding container 17c9a63a5e30880d9cda09c239a425b11859efc266e952ed5b769b23ba44fa9c: Status 404 returned error can't find the container with id 17c9a63a5e30880d9cda09c239a425b11859efc266e952ed5b769b23ba44fa9c Dec 11 10:09:50 crc kubenswrapper[4746]: I1211 10:09:50.080887 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7xrhq"] Dec 
11 10:09:50 crc kubenswrapper[4746]: I1211 10:09:50.399315 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7xrhq" event={"ID":"1680d5f5-0717-4a15-9d49-e692b9d36ebd","Type":"ContainerStarted","Data":"17c9a63a5e30880d9cda09c239a425b11859efc266e952ed5b769b23ba44fa9c"} Dec 11 10:09:50 crc kubenswrapper[4746]: I1211 10:09:50.400780 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2rjdf" event={"ID":"80d59e5e-7506-4206-8cfe-43bf85fd6d0d","Type":"ContainerStarted","Data":"8c8d1e88b881d2539c2053d7ea3ce3792d687d90d339bcc16a3b6aa22ae66c78"} Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.172399 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7xrhq"] Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.253290 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q5gf4"] Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.276670 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.315032 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q5gf4"] Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.324479 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd0ef0f9-a620-4711-84d4-8b3d21232f50-dns-svc\") pod \"dnsmasq-dns-666b6646f7-q5gf4\" (UID: \"fd0ef0f9-a620-4711-84d4-8b3d21232f50\") " pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.324554 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbz6d\" (UniqueName: \"kubernetes.io/projected/fd0ef0f9-a620-4711-84d4-8b3d21232f50-kube-api-access-rbz6d\") pod \"dnsmasq-dns-666b6646f7-q5gf4\" (UID: \"fd0ef0f9-a620-4711-84d4-8b3d21232f50\") " pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.324599 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd0ef0f9-a620-4711-84d4-8b3d21232f50-config\") pod \"dnsmasq-dns-666b6646f7-q5gf4\" (UID: \"fd0ef0f9-a620-4711-84d4-8b3d21232f50\") " pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.425918 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd0ef0f9-a620-4711-84d4-8b3d21232f50-dns-svc\") pod \"dnsmasq-dns-666b6646f7-q5gf4\" (UID: \"fd0ef0f9-a620-4711-84d4-8b3d21232f50\") " pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.425999 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbz6d\" (UniqueName: 
\"kubernetes.io/projected/fd0ef0f9-a620-4711-84d4-8b3d21232f50-kube-api-access-rbz6d\") pod \"dnsmasq-dns-666b6646f7-q5gf4\" (UID: \"fd0ef0f9-a620-4711-84d4-8b3d21232f50\") " pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.426075 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd0ef0f9-a620-4711-84d4-8b3d21232f50-config\") pod \"dnsmasq-dns-666b6646f7-q5gf4\" (UID: \"fd0ef0f9-a620-4711-84d4-8b3d21232f50\") " pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.427297 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd0ef0f9-a620-4711-84d4-8b3d21232f50-config\") pod \"dnsmasq-dns-666b6646f7-q5gf4\" (UID: \"fd0ef0f9-a620-4711-84d4-8b3d21232f50\") " pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.427654 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd0ef0f9-a620-4711-84d4-8b3d21232f50-dns-svc\") pod \"dnsmasq-dns-666b6646f7-q5gf4\" (UID: \"fd0ef0f9-a620-4711-84d4-8b3d21232f50\") " pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.455987 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbz6d\" (UniqueName: \"kubernetes.io/projected/fd0ef0f9-a620-4711-84d4-8b3d21232f50-kube-api-access-rbz6d\") pod \"dnsmasq-dns-666b6646f7-q5gf4\" (UID: \"fd0ef0f9-a620-4711-84d4-8b3d21232f50\") " pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.637749 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2rjdf"] Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.647827 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.686579 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-shqnk"] Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.697593 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.699603 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-shqnk"] Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.847543 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvgh8\" (UniqueName: \"kubernetes.io/projected/8525b54d-365c-4827-a2b9-629a000d149b-kube-api-access-nvgh8\") pod \"dnsmasq-dns-57d769cc4f-shqnk\" (UID: \"8525b54d-365c-4827-a2b9-629a000d149b\") " pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.847661 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8525b54d-365c-4827-a2b9-629a000d149b-config\") pod \"dnsmasq-dns-57d769cc4f-shqnk\" (UID: \"8525b54d-365c-4827-a2b9-629a000d149b\") " pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.847685 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8525b54d-365c-4827-a2b9-629a000d149b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-shqnk\" (UID: \"8525b54d-365c-4827-a2b9-629a000d149b\") " pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.949216 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8525b54d-365c-4827-a2b9-629a000d149b-config\") pod \"dnsmasq-dns-57d769cc4f-shqnk\" (UID: \"8525b54d-365c-4827-a2b9-629a000d149b\") " pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.949922 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8525b54d-365c-4827-a2b9-629a000d149b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-shqnk\" (UID: \"8525b54d-365c-4827-a2b9-629a000d149b\") " pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.950036 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvgh8\" (UniqueName: \"kubernetes.io/projected/8525b54d-365c-4827-a2b9-629a000d149b-kube-api-access-nvgh8\") pod \"dnsmasq-dns-57d769cc4f-shqnk\" (UID: \"8525b54d-365c-4827-a2b9-629a000d149b\") " pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.952364 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8525b54d-365c-4827-a2b9-629a000d149b-config\") pod \"dnsmasq-dns-57d769cc4f-shqnk\" (UID: \"8525b54d-365c-4827-a2b9-629a000d149b\") " pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.953729 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8525b54d-365c-4827-a2b9-629a000d149b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-shqnk\" (UID: \"8525b54d-365c-4827-a2b9-629a000d149b\") " pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" Dec 11 10:09:52 crc kubenswrapper[4746]: I1211 10:09:52.991918 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvgh8\" (UniqueName: \"kubernetes.io/projected/8525b54d-365c-4827-a2b9-629a000d149b-kube-api-access-nvgh8\") pod 
\"dnsmasq-dns-57d769cc4f-shqnk\" (UID: \"8525b54d-365c-4827-a2b9-629a000d149b\") " pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.048172 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.391284 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.394303 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.403773 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.403952 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.404106 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.404691 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-sqrz8" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.404758 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.404913 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.405956 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.416832 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 10:09:53 crc 
kubenswrapper[4746]: I1211 10:09:53.473760 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q5gf4"] Dec 11 10:09:53 crc kubenswrapper[4746]: W1211 10:09:53.506985 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd0ef0f9_a620_4711_84d4_8b3d21232f50.slice/crio-6ae9e457ccabdc3c27d036e8eeeaf495434e0f6cef2e5aa96277f42e6e84008f WatchSource:0}: Error finding container 6ae9e457ccabdc3c27d036e8eeeaf495434e0f6cef2e5aa96277f42e6e84008f: Status 404 returned error can't find the container with id 6ae9e457ccabdc3c27d036e8eeeaf495434e0f6cef2e5aa96277f42e6e84008f Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.586491 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.586549 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.586573 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6b37a306-a93c-4cb2-9a15-888df45f0ca7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.586614 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.586638 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-config-data\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.586654 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4qv9\" (UniqueName: \"kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-kube-api-access-f4qv9\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.586679 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.586905 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.586970 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/6b37a306-a93c-4cb2-9a15-888df45f0ca7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.587000 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.587027 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.674952 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-shqnk"] Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.689695 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.689759 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.689782 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/6b37a306-a93c-4cb2-9a15-888df45f0ca7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.689823 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.689852 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-config-data\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.689868 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4qv9\" (UniqueName: \"kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-kube-api-access-f4qv9\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.689889 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.689915 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.689947 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.689963 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6b37a306-a93c-4cb2-9a15-888df45f0ca7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.689988 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.690311 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.694954 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.695152 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-config-data\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.696982 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.697163 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.697584 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.708972 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6b37a306-a93c-4cb2-9a15-888df45f0ca7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.709011 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.709160 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6b37a306-a93c-4cb2-9a15-888df45f0ca7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.709759 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.719859 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4qv9\" (UniqueName: \"kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-kube-api-access-f4qv9\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.720319 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.740461 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.859136 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.862747 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.867076 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.867316 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.867479 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.867793 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.867971 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.868101 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bkxl5" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.868205 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.884719 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.995355 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txqqq\" (UniqueName: \"kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-kube-api-access-txqqq\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.996215 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.996259 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.996291 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.996392 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.996464 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.996504 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.996582 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.996626 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.996718 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:53 crc kubenswrapper[4746]: I1211 10:09:53.996749 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.097772 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.097827 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.097868 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txqqq\" (UniqueName: \"kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-kube-api-access-txqqq\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.097904 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.097925 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.097938 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-tls\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.097993 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.098029 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.098069 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.098089 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.098115 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 
crc kubenswrapper[4746]: I1211 10:09:54.103751 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.104096 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.104682 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.104903 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.105245 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.105909 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.106496 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.106844 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.125030 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.127391 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.142579 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txqqq\" (UniqueName: \"kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-kube-api-access-txqqq\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.195366 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.394451 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 10:09:54 crc kubenswrapper[4746]: W1211 10:09:54.408739 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b37a306_a93c_4cb2_9a15_888df45f0ca7.slice/crio-bb0486dfe115c29def0f4602e0c24f7da81e35e187c78cb402e058ba91b1f529 WatchSource:0}: Error finding container bb0486dfe115c29def0f4602e0c24f7da81e35e187c78cb402e058ba91b1f529: Status 404 returned error can't find the container with id bb0486dfe115c29def0f4602e0c24f7da81e35e187c78cb402e058ba91b1f529 Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.532860 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.585528 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" event={"ID":"8525b54d-365c-4827-a2b9-629a000d149b","Type":"ContainerStarted","Data":"f8073dbfab6625432881e000ad04ab713609f41cbbc9bb86167db84c10abc0a9"} Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.590126 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6b37a306-a93c-4cb2-9a15-888df45f0ca7","Type":"ContainerStarted","Data":"bb0486dfe115c29def0f4602e0c24f7da81e35e187c78cb402e058ba91b1f529"} Dec 11 10:09:54 crc kubenswrapper[4746]: I1211 10:09:54.592174 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" event={"ID":"fd0ef0f9-a620-4711-84d4-8b3d21232f50","Type":"ContainerStarted","Data":"6ae9e457ccabdc3c27d036e8eeeaf495434e0f6cef2e5aa96277f42e6e84008f"} Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.011176 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.015346 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.029152 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.029480 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-d44n7" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.029869 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.031210 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.034388 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.035943 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.166392 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.166478 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f35f21ce-59cb-4ee0-850c-9aba4010c890-kolla-config\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.166509 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/f35f21ce-59cb-4ee0-850c-9aba4010c890-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.166618 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f35f21ce-59cb-4ee0-850c-9aba4010c890-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.166662 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89n7r\" (UniqueName: \"kubernetes.io/projected/f35f21ce-59cb-4ee0-850c-9aba4010c890-kube-api-access-89n7r\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.166688 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35f21ce-59cb-4ee0-850c-9aba4010c890-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.166717 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f35f21ce-59cb-4ee0-850c-9aba4010c890-config-data-default\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.166758 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f35f21ce-59cb-4ee0-850c-9aba4010c890-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.256422 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.269281 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f35f21ce-59cb-4ee0-850c-9aba4010c890-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.269396 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89n7r\" (UniqueName: \"kubernetes.io/projected/f35f21ce-59cb-4ee0-850c-9aba4010c890-kube-api-access-89n7r\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.269431 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35f21ce-59cb-4ee0-850c-9aba4010c890-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.269475 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f35f21ce-59cb-4ee0-850c-9aba4010c890-config-data-default\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.269534 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f35f21ce-59cb-4ee0-850c-9aba4010c890-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.269614 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.269653 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f35f21ce-59cb-4ee0-850c-9aba4010c890-kolla-config\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.269685 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f35f21ce-59cb-4ee0-850c-9aba4010c890-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.270422 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f35f21ce-59cb-4ee0-850c-9aba4010c890-config-data-generated\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.273478 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f35f21ce-59cb-4ee0-850c-9aba4010c890-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.274500 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.276150 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f35f21ce-59cb-4ee0-850c-9aba4010c890-kolla-config\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.276284 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f35f21ce-59cb-4ee0-850c-9aba4010c890-operator-scripts\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.281803 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f35f21ce-59cb-4ee0-850c-9aba4010c890-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.300064 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35f21ce-59cb-4ee0-850c-9aba4010c890-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.306096 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89n7r\" (UniqueName: \"kubernetes.io/projected/f35f21ce-59cb-4ee0-850c-9aba4010c890-kube-api-access-89n7r\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.328001 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"f35f21ce-59cb-4ee0-850c-9aba4010c890\") " pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.363501 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.619483 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c896f2d4-ac49-431e-b8c5-eda758cfa7cd","Type":"ContainerStarted","Data":"07fd51cae742b38eff0481744bb9b62f34c804ed402d909b24ae097ea1061312"} Dec 11 10:09:55 crc kubenswrapper[4746]: I1211 10:09:55.935868 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 11 10:09:55 crc kubenswrapper[4746]: W1211 10:09:55.961328 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf35f21ce_59cb_4ee0_850c_9aba4010c890.slice/crio-b6fd18f50903984f1f67626f570fe7b1984e820fbd201094732e14ed120b9db8 WatchSource:0}: Error finding container b6fd18f50903984f1f67626f570fe7b1984e820fbd201094732e14ed120b9db8: Status 404 returned error can't find the container with id b6fd18f50903984f1f67626f570fe7b1984e820fbd201094732e14ed120b9db8 Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.221297 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 10:09:56 crc 
kubenswrapper[4746]: I1211 10:09:56.228604 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.236253 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-q5vmh" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.237602 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.237750 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.238327 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.246900 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.457723 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.457854 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.457888 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hqsvl\" (UniqueName: \"kubernetes.io/projected/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-kube-api-access-hqsvl\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.457909 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.457954 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.457980 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.458033 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.458094 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.500881 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.504031 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.517846 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.518198 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-fc7bt" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.518369 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.527076 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.561819 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.561900 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 
11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.561987 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.562074 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.562138 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqsvl\" (UniqueName: \"kubernetes.io/projected/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-kube-api-access-hqsvl\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.562167 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.562202 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.562303 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.564565 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.564860 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.567119 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.568860 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.569831 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.604747 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqsvl\" (UniqueName: \"kubernetes.io/projected/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-kube-api-access-hqsvl\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.605038 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.624332 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.665507 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nsr5\" (UniqueName: \"kubernetes.io/projected/a99efa0a-c9ba-4a4e-9014-fe1efed47a8a-kube-api-access-7nsr5\") pod \"memcached-0\" (UID: \"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a\") " pod="openstack/memcached-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.665756 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a99efa0a-c9ba-4a4e-9014-fe1efed47a8a-config-data\") 
pod \"memcached-0\" (UID: \"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a\") " pod="openstack/memcached-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.665796 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a99efa0a-c9ba-4a4e-9014-fe1efed47a8a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a\") " pod="openstack/memcached-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.665830 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a99efa0a-c9ba-4a4e-9014-fe1efed47a8a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a\") " pod="openstack/memcached-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.665854 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a99efa0a-c9ba-4a4e-9014-fe1efed47a8a-kolla-config\") pod \"memcached-0\" (UID: \"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a\") " pod="openstack/memcached-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.669623 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f35f21ce-59cb-4ee0-850c-9aba4010c890","Type":"ContainerStarted","Data":"b6fd18f50903984f1f67626f570fe7b1984e820fbd201094732e14ed120b9db8"} Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.725682 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5170f24-7cb7-43d5-bacc-c8224cfabcf4-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"f5170f24-7cb7-43d5-bacc-c8224cfabcf4\") " pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.767901 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a99efa0a-c9ba-4a4e-9014-fe1efed47a8a-config-data\") pod \"memcached-0\" (UID: \"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a\") " pod="openstack/memcached-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.768000 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a99efa0a-c9ba-4a4e-9014-fe1efed47a8a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a\") " pod="openstack/memcached-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.768037 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a99efa0a-c9ba-4a4e-9014-fe1efed47a8a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a\") " pod="openstack/memcached-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.768075 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a99efa0a-c9ba-4a4e-9014-fe1efed47a8a-kolla-config\") pod \"memcached-0\" (UID: \"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a\") " pod="openstack/memcached-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.768125 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nsr5\" (UniqueName: \"kubernetes.io/projected/a99efa0a-c9ba-4a4e-9014-fe1efed47a8a-kube-api-access-7nsr5\") pod \"memcached-0\" (UID: \"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a\") " pod="openstack/memcached-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.769695 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a99efa0a-c9ba-4a4e-9014-fe1efed47a8a-config-data\") pod \"memcached-0\" (UID: 
\"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a\") " pod="openstack/memcached-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.771054 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a99efa0a-c9ba-4a4e-9014-fe1efed47a8a-kolla-config\") pod \"memcached-0\" (UID: \"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a\") " pod="openstack/memcached-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.783705 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a99efa0a-c9ba-4a4e-9014-fe1efed47a8a-combined-ca-bundle\") pod \"memcached-0\" (UID: \"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a\") " pod="openstack/memcached-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.790849 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nsr5\" (UniqueName: \"kubernetes.io/projected/a99efa0a-c9ba-4a4e-9014-fe1efed47a8a-kube-api-access-7nsr5\") pod \"memcached-0\" (UID: \"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a\") " pod="openstack/memcached-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.818416 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/a99efa0a-c9ba-4a4e-9014-fe1efed47a8a-memcached-tls-certs\") pod \"memcached-0\" (UID: \"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a\") " pod="openstack/memcached-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.862559 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 11 10:09:56 crc kubenswrapper[4746]: I1211 10:09:56.879713 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 11 10:09:57 crc kubenswrapper[4746]: I1211 10:09:57.597038 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 11 10:09:57 crc kubenswrapper[4746]: I1211 10:09:57.724903 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 11 10:09:57 crc kubenswrapper[4746]: I1211 10:09:57.728404 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a","Type":"ContainerStarted","Data":"15ee107fe9bb2b100fc9bd46ad56dad9c88024b24fbd5edf692b09060a9d23a8"} Dec 11 10:09:57 crc kubenswrapper[4746]: W1211 10:09:57.737966 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5170f24_7cb7_43d5_bacc_c8224cfabcf4.slice/crio-c01b4c859a15d20c5f161d40d5709d53eecf8bf357146e9b49297994cbeced5e WatchSource:0}: Error finding container c01b4c859a15d20c5f161d40d5709d53eecf8bf357146e9b49297994cbeced5e: Status 404 returned error can't find the container with id c01b4c859a15d20c5f161d40d5709d53eecf8bf357146e9b49297994cbeced5e Dec 11 10:09:58 crc kubenswrapper[4746]: I1211 10:09:58.754753 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 10:09:58 crc kubenswrapper[4746]: I1211 10:09:58.761500 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 10:09:58 crc kubenswrapper[4746]: I1211 10:09:58.770232 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-zw4sp" Dec 11 10:09:58 crc kubenswrapper[4746]: I1211 10:09:58.786718 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 10:09:58 crc kubenswrapper[4746]: I1211 10:09:58.800228 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f5170f24-7cb7-43d5-bacc-c8224cfabcf4","Type":"ContainerStarted","Data":"c01b4c859a15d20c5f161d40d5709d53eecf8bf357146e9b49297994cbeced5e"} Dec 11 10:09:58 crc kubenswrapper[4746]: I1211 10:09:58.946251 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j77q\" (UniqueName: \"kubernetes.io/projected/6f850dc5-ff1d-4e1e-a8ac-74fac0011d66-kube-api-access-2j77q\") pod \"kube-state-metrics-0\" (UID: \"6f850dc5-ff1d-4e1e-a8ac-74fac0011d66\") " pod="openstack/kube-state-metrics-0" Dec 11 10:09:59 crc kubenswrapper[4746]: I1211 10:09:59.048323 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j77q\" (UniqueName: \"kubernetes.io/projected/6f850dc5-ff1d-4e1e-a8ac-74fac0011d66-kube-api-access-2j77q\") pod \"kube-state-metrics-0\" (UID: \"6f850dc5-ff1d-4e1e-a8ac-74fac0011d66\") " pod="openstack/kube-state-metrics-0" Dec 11 10:09:59 crc kubenswrapper[4746]: I1211 10:09:59.075540 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j77q\" (UniqueName: \"kubernetes.io/projected/6f850dc5-ff1d-4e1e-a8ac-74fac0011d66-kube-api-access-2j77q\") pod \"kube-state-metrics-0\" (UID: \"6f850dc5-ff1d-4e1e-a8ac-74fac0011d66\") " pod="openstack/kube-state-metrics-0" Dec 11 10:09:59 crc kubenswrapper[4746]: I1211 10:09:59.102757 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 10:10:00 crc kubenswrapper[4746]: I1211 10:10:00.200454 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 10:10:00 crc kubenswrapper[4746]: I1211 10:10:00.858149 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6f850dc5-ff1d-4e1e-a8ac-74fac0011d66","Type":"ContainerStarted","Data":"6d26c49465379e8ffb35ff4e54d49dbe3cef39a0901828d07494db9726e83306"} Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.310130 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-242vs"] Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.324646 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.328801 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.329319 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-ffgbt" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.329533 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.332533 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-242vs"] Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.363854 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-w5jlj"] Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.367868 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.396314 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-w5jlj"] Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.453014 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31760b52-7caf-49dd-bf1e-2d2f88b000a2-var-run\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.453104 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bvlq\" (UniqueName: \"kubernetes.io/projected/31760b52-7caf-49dd-bf1e-2d2f88b000a2-kube-api-access-9bvlq\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.453157 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31760b52-7caf-49dd-bf1e-2d2f88b000a2-var-log-ovn\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.453195 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31760b52-7caf-49dd-bf1e-2d2f88b000a2-scripts\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.453582 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/31760b52-7caf-49dd-bf1e-2d2f88b000a2-var-run-ovn\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.453662 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31760b52-7caf-49dd-bf1e-2d2f88b000a2-combined-ca-bundle\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.454102 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/31760b52-7caf-49dd-bf1e-2d2f88b000a2-ovn-controller-tls-certs\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.559187 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31760b52-7caf-49dd-bf1e-2d2f88b000a2-var-run-ovn\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.559310 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-var-log\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.559343 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/31760b52-7caf-49dd-bf1e-2d2f88b000a2-combined-ca-bundle\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.559412 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-var-lib\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.559468 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/31760b52-7caf-49dd-bf1e-2d2f88b000a2-ovn-controller-tls-certs\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.559491 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-var-run\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.559548 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31760b52-7caf-49dd-bf1e-2d2f88b000a2-var-run\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.559593 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bvlq\" (UniqueName: \"kubernetes.io/projected/31760b52-7caf-49dd-bf1e-2d2f88b000a2-kube-api-access-9bvlq\") pod 
\"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.559633 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31760b52-7caf-49dd-bf1e-2d2f88b000a2-var-log-ovn\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.559655 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31760b52-7caf-49dd-bf1e-2d2f88b000a2-scripts\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.559700 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4t4t\" (UniqueName: \"kubernetes.io/projected/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-kube-api-access-r4t4t\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.559728 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-etc-ovs\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.559747 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-scripts\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " 
pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.560409 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/31760b52-7caf-49dd-bf1e-2d2f88b000a2-var-log-ovn\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.560483 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/31760b52-7caf-49dd-bf1e-2d2f88b000a2-var-run\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.561116 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/31760b52-7caf-49dd-bf1e-2d2f88b000a2-var-run-ovn\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.562127 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31760b52-7caf-49dd-bf1e-2d2f88b000a2-scripts\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.568677 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31760b52-7caf-49dd-bf1e-2d2f88b000a2-combined-ca-bundle\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.568748 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/31760b52-7caf-49dd-bf1e-2d2f88b000a2-ovn-controller-tls-certs\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.587613 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bvlq\" (UniqueName: \"kubernetes.io/projected/31760b52-7caf-49dd-bf1e-2d2f88b000a2-kube-api-access-9bvlq\") pod \"ovn-controller-242vs\" (UID: \"31760b52-7caf-49dd-bf1e-2d2f88b000a2\") " pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.661831 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-var-log\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.661941 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-var-lib\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.661982 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-var-run\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.662071 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4t4t\" (UniqueName: \"kubernetes.io/projected/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-kube-api-access-r4t4t\") pod 
\"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.662094 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-etc-ovs\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.662111 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-scripts\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.662751 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-var-run\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.662973 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-var-log\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.664397 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-etc-ovs\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.664596 4746 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-242vs" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.664753 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-var-lib\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.665997 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-scripts\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.698390 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4t4t\" (UniqueName: \"kubernetes.io/projected/fbd694c4-2e54-4535-a357-0fb7ffdcabdb-kube-api-access-r4t4t\") pod \"ovn-controller-ovs-w5jlj\" (UID: \"fbd694c4-2e54-4535-a357-0fb7ffdcabdb\") " pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:02 crc kubenswrapper[4746]: I1211 10:10:02.997173 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.231484 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.233320 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.236849 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-45nps" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.237075 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.237280 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.237559 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.239898 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.243011 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.380272 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjzmj\" (UniqueName: \"kubernetes.io/projected/44843435-5bdd-416c-af49-abc0ce7c6c03-kube-api-access-wjzmj\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.380315 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.380339 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44843435-5bdd-416c-af49-abc0ce7c6c03-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.380401 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44843435-5bdd-416c-af49-abc0ce7c6c03-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.380420 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44843435-5bdd-416c-af49-abc0ce7c6c03-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.380439 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44843435-5bdd-416c-af49-abc0ce7c6c03-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.380467 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44843435-5bdd-416c-af49-abc0ce7c6c03-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.380504 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/44843435-5bdd-416c-af49-abc0ce7c6c03-config\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.482284 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjzmj\" (UniqueName: \"kubernetes.io/projected/44843435-5bdd-416c-af49-abc0ce7c6c03-kube-api-access-wjzmj\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.482352 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.482389 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44843435-5bdd-416c-af49-abc0ce7c6c03-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.482480 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44843435-5bdd-416c-af49-abc0ce7c6c03-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.482504 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44843435-5bdd-416c-af49-abc0ce7c6c03-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " 
pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.482564 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44843435-5bdd-416c-af49-abc0ce7c6c03-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.482612 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44843435-5bdd-416c-af49-abc0ce7c6c03-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.482672 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44843435-5bdd-416c-af49-abc0ce7c6c03-config\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.483239 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/44843435-5bdd-416c-af49-abc0ce7c6c03-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.483522 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.484157 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/44843435-5bdd-416c-af49-abc0ce7c6c03-config\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.486721 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44843435-5bdd-416c-af49-abc0ce7c6c03-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.488451 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44843435-5bdd-416c-af49-abc0ce7c6c03-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.493076 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/44843435-5bdd-416c-af49-abc0ce7c6c03-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.495220 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/44843435-5bdd-416c-af49-abc0ce7c6c03-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.505479 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjzmj\" (UniqueName: \"kubernetes.io/projected/44843435-5bdd-416c-af49-abc0ce7c6c03-kube-api-access-wjzmj\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " 
pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.527845 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"44843435-5bdd-416c-af49-abc0ce7c6c03\") " pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:03 crc kubenswrapper[4746]: I1211 10:10:03.559945 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:05 crc kubenswrapper[4746]: I1211 10:10:05.000341 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rp4z9"] Dec 11 10:10:05 crc kubenswrapper[4746]: I1211 10:10:05.003922 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:10:05 crc kubenswrapper[4746]: I1211 10:10:05.020822 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rp4z9"] Dec 11 10:10:05 crc kubenswrapper[4746]: I1211 10:10:05.119009 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktgbp\" (UniqueName: \"kubernetes.io/projected/87f5deb8-7a92-4b45-b818-8fa95241290b-kube-api-access-ktgbp\") pod \"certified-operators-rp4z9\" (UID: \"87f5deb8-7a92-4b45-b818-8fa95241290b\") " pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:10:05 crc kubenswrapper[4746]: I1211 10:10:05.119158 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f5deb8-7a92-4b45-b818-8fa95241290b-utilities\") pod \"certified-operators-rp4z9\" (UID: \"87f5deb8-7a92-4b45-b818-8fa95241290b\") " pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:10:05 crc kubenswrapper[4746]: I1211 10:10:05.119191 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f5deb8-7a92-4b45-b818-8fa95241290b-catalog-content\") pod \"certified-operators-rp4z9\" (UID: \"87f5deb8-7a92-4b45-b818-8fa95241290b\") " pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:10:05 crc kubenswrapper[4746]: I1211 10:10:05.220575 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f5deb8-7a92-4b45-b818-8fa95241290b-utilities\") pod \"certified-operators-rp4z9\" (UID: \"87f5deb8-7a92-4b45-b818-8fa95241290b\") " pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:10:05 crc kubenswrapper[4746]: I1211 10:10:05.220671 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f5deb8-7a92-4b45-b818-8fa95241290b-catalog-content\") pod \"certified-operators-rp4z9\" (UID: \"87f5deb8-7a92-4b45-b818-8fa95241290b\") " pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:10:05 crc kubenswrapper[4746]: I1211 10:10:05.220814 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktgbp\" (UniqueName: \"kubernetes.io/projected/87f5deb8-7a92-4b45-b818-8fa95241290b-kube-api-access-ktgbp\") pod \"certified-operators-rp4z9\" (UID: \"87f5deb8-7a92-4b45-b818-8fa95241290b\") " pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:10:05 crc kubenswrapper[4746]: I1211 10:10:05.221453 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f5deb8-7a92-4b45-b818-8fa95241290b-utilities\") pod \"certified-operators-rp4z9\" (UID: \"87f5deb8-7a92-4b45-b818-8fa95241290b\") " pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:10:05 crc kubenswrapper[4746]: I1211 10:10:05.221472 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f5deb8-7a92-4b45-b818-8fa95241290b-catalog-content\") pod \"certified-operators-rp4z9\" (UID: \"87f5deb8-7a92-4b45-b818-8fa95241290b\") " pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:10:05 crc kubenswrapper[4746]: I1211 10:10:05.247722 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktgbp\" (UniqueName: \"kubernetes.io/projected/87f5deb8-7a92-4b45-b818-8fa95241290b-kube-api-access-ktgbp\") pod \"certified-operators-rp4z9\" (UID: \"87f5deb8-7a92-4b45-b818-8fa95241290b\") " pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:10:05 crc kubenswrapper[4746]: I1211 10:10:05.341957 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.146435 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.151031 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.156365 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.156380 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-26596" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.156456 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.156467 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.156659 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.350932 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f17050a1-f53d-4058-9b22-1d26754f13d0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.350986 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f17050a1-f53d-4058-9b22-1d26754f13d0-config\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.351016 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f17050a1-f53d-4058-9b22-1d26754f13d0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" 
(UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.351088 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.351134 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djbws\" (UniqueName: \"kubernetes.io/projected/f17050a1-f53d-4058-9b22-1d26754f13d0-kube-api-access-djbws\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.351373 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17050a1-f53d-4058-9b22-1d26754f13d0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.351450 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f17050a1-f53d-4058-9b22-1d26754f13d0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.351540 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f17050a1-f53d-4058-9b22-1d26754f13d0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc 
kubenswrapper[4746]: I1211 10:10:06.453188 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f17050a1-f53d-4058-9b22-1d26754f13d0-config\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.453598 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f17050a1-f53d-4058-9b22-1d26754f13d0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.453667 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.453724 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djbws\" (UniqueName: \"kubernetes.io/projected/f17050a1-f53d-4058-9b22-1d26754f13d0-kube-api-access-djbws\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.453776 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17050a1-f53d-4058-9b22-1d26754f13d0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.453800 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/f17050a1-f53d-4058-9b22-1d26754f13d0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.453823 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f17050a1-f53d-4058-9b22-1d26754f13d0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.453856 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f17050a1-f53d-4058-9b22-1d26754f13d0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.454889 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f17050a1-f53d-4058-9b22-1d26754f13d0-config\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.455482 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.455552 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f17050a1-f53d-4058-9b22-1d26754f13d0-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 
10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.456115 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f17050a1-f53d-4058-9b22-1d26754f13d0-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.459494 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f17050a1-f53d-4058-9b22-1d26754f13d0-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.460455 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f17050a1-f53d-4058-9b22-1d26754f13d0-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.461973 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f17050a1-f53d-4058-9b22-1d26754f13d0-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.515816 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djbws\" (UniqueName: \"kubernetes.io/projected/f17050a1-f53d-4058-9b22-1d26754f13d0-kube-api-access-djbws\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.524302 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f17050a1-f53d-4058-9b22-1d26754f13d0\") " pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:06 crc kubenswrapper[4746]: I1211 10:10:06.775191 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:13 crc kubenswrapper[4746]: I1211 10:10:13.368622 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4lz68"] Dec 11 10:10:13 crc kubenswrapper[4746]: I1211 10:10:13.372004 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lz68" Dec 11 10:10:13 crc kubenswrapper[4746]: I1211 10:10:13.384892 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lz68"] Dec 11 10:10:13 crc kubenswrapper[4746]: I1211 10:10:13.501272 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/877b3463-e822-4f57-b9f2-afbcbda4a044-utilities\") pod \"redhat-marketplace-4lz68\" (UID: \"877b3463-e822-4f57-b9f2-afbcbda4a044\") " pod="openshift-marketplace/redhat-marketplace-4lz68" Dec 11 10:10:13 crc kubenswrapper[4746]: I1211 10:10:13.501626 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/877b3463-e822-4f57-b9f2-afbcbda4a044-catalog-content\") pod \"redhat-marketplace-4lz68\" (UID: \"877b3463-e822-4f57-b9f2-afbcbda4a044\") " pod="openshift-marketplace/redhat-marketplace-4lz68" Dec 11 10:10:13 crc kubenswrapper[4746]: I1211 10:10:13.502298 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2gbc\" (UniqueName: \"kubernetes.io/projected/877b3463-e822-4f57-b9f2-afbcbda4a044-kube-api-access-f2gbc\") pod 
\"redhat-marketplace-4lz68\" (UID: \"877b3463-e822-4f57-b9f2-afbcbda4a044\") " pod="openshift-marketplace/redhat-marketplace-4lz68" Dec 11 10:10:13 crc kubenswrapper[4746]: I1211 10:10:13.603660 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/877b3463-e822-4f57-b9f2-afbcbda4a044-utilities\") pod \"redhat-marketplace-4lz68\" (UID: \"877b3463-e822-4f57-b9f2-afbcbda4a044\") " pod="openshift-marketplace/redhat-marketplace-4lz68" Dec 11 10:10:13 crc kubenswrapper[4746]: I1211 10:10:13.603811 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/877b3463-e822-4f57-b9f2-afbcbda4a044-catalog-content\") pod \"redhat-marketplace-4lz68\" (UID: \"877b3463-e822-4f57-b9f2-afbcbda4a044\") " pod="openshift-marketplace/redhat-marketplace-4lz68" Dec 11 10:10:13 crc kubenswrapper[4746]: I1211 10:10:13.603877 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2gbc\" (UniqueName: \"kubernetes.io/projected/877b3463-e822-4f57-b9f2-afbcbda4a044-kube-api-access-f2gbc\") pod \"redhat-marketplace-4lz68\" (UID: \"877b3463-e822-4f57-b9f2-afbcbda4a044\") " pod="openshift-marketplace/redhat-marketplace-4lz68" Dec 11 10:10:13 crc kubenswrapper[4746]: I1211 10:10:13.604436 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/877b3463-e822-4f57-b9f2-afbcbda4a044-catalog-content\") pod \"redhat-marketplace-4lz68\" (UID: \"877b3463-e822-4f57-b9f2-afbcbda4a044\") " pod="openshift-marketplace/redhat-marketplace-4lz68" Dec 11 10:10:13 crc kubenswrapper[4746]: I1211 10:10:13.604434 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/877b3463-e822-4f57-b9f2-afbcbda4a044-utilities\") pod \"redhat-marketplace-4lz68\" (UID: 
\"877b3463-e822-4f57-b9f2-afbcbda4a044\") " pod="openshift-marketplace/redhat-marketplace-4lz68" Dec 11 10:10:13 crc kubenswrapper[4746]: I1211 10:10:13.631681 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2gbc\" (UniqueName: \"kubernetes.io/projected/877b3463-e822-4f57-b9f2-afbcbda4a044-kube-api-access-f2gbc\") pod \"redhat-marketplace-4lz68\" (UID: \"877b3463-e822-4f57-b9f2-afbcbda4a044\") " pod="openshift-marketplace/redhat-marketplace-4lz68" Dec 11 10:10:13 crc kubenswrapper[4746]: I1211 10:10:13.701796 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lz68" Dec 11 10:10:18 crc kubenswrapper[4746]: E1211 10:10:18.435569 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 11 10:10:18 crc kubenswrapper[4746]: E1211 10:10:18.436238 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-txqqq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(c896f2d4-ac49-431e-b8c5-eda758cfa7cd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:10:18 crc 
kubenswrapper[4746]: E1211 10:10:18.437416 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="c896f2d4-ac49-431e-b8c5-eda758cfa7cd" Dec 11 10:10:19 crc kubenswrapper[4746]: E1211 10:10:19.068999 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="c896f2d4-ac49-431e-b8c5-eda758cfa7cd" Dec 11 10:10:19 crc kubenswrapper[4746]: E1211 10:10:19.176143 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-memcached:current-podified" Dec 11 10:10:19 crc kubenswrapper[4746]: E1211 10:10:19.176348 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,Command:[/usr/bin/dumb-init -- 
/usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:nf6h589h5f4h5ffhbdh57h566hf9h5d6hd8h57dh9fh574hd5hc6h679h56ch699hf9hb9h554h5dfh5cfh74hd7h658h55dh645h58h684hbfh587q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7nsr5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(a99efa0a-c9ba-4a4e-9014-fe1efed47a8a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:10:19 crc kubenswrapper[4746]: E1211 10:10:19.177991 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="a99efa0a-c9ba-4a4e-9014-fe1efed47a8a" Dec 11 10:10:20 crc kubenswrapper[4746]: E1211 10:10:20.076302 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-memcached:current-podified\\\"\"" pod="openstack/memcached-0" podUID="a99efa0a-c9ba-4a4e-9014-fe1efed47a8a" Dec 11 10:10:21 crc kubenswrapper[4746]: E1211 10:10:21.260254 4746 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 11 10:10:21 crc kubenswrapper[4746]: E1211 10:10:21.260420 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hqsvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootF
ilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(f5170f24-7cb7-43d5-bacc-c8224cfabcf4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:10:21 crc kubenswrapper[4746]: E1211 10:10:21.261546 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="f5170f24-7cb7-43d5-bacc-c8224cfabcf4" Dec 11 10:10:21 crc kubenswrapper[4746]: E1211 10:10:21.449609 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 11 10:10:21 crc kubenswrapper[4746]: E1211 10:10:21.449831 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m 
DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f4qv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(6b37a306-a93c-4cb2-9a15-888df45f0ca7): ErrImagePull: 
rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:10:21 crc kubenswrapper[4746]: E1211 10:10:21.451084 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="6b37a306-a93c-4cb2-9a15-888df45f0ca7" Dec 11 10:10:22 crc kubenswrapper[4746]: E1211 10:10:22.093768 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="f5170f24-7cb7-43d5-bacc-c8224cfabcf4" Dec 11 10:10:22 crc kubenswrapper[4746]: E1211 10:10:22.094302 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="6b37a306-a93c-4cb2-9a15-888df45f0ca7" Dec 11 10:10:27 crc kubenswrapper[4746]: E1211 10:10:27.017556 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 11 10:10:27 crc kubenswrapper[4746]: E1211 10:10:27.017994 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash 
/var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89n7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(f35f21ce-59cb-4ee0-850c-9aba4010c890): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:10:27 crc kubenswrapper[4746]: E1211 10:10:27.019287 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="f35f21ce-59cb-4ee0-850c-9aba4010c890" Dec 11 10:10:27 crc kubenswrapper[4746]: E1211 10:10:27.152273 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="f35f21ce-59cb-4ee0-850c-9aba4010c890" Dec 11 10:10:27 crc kubenswrapper[4746]: E1211 10:10:27.186160 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 11 10:10:27 crc kubenswrapper[4746]: E1211 10:10:27.186213 4746 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 11 10:10:27 crc kubenswrapper[4746]: E1211 10:10:27.186328 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2j77q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(6f850dc5-ff1d-4e1e-a8ac-74fac0011d66): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying 
layer: context canceled" logger="UnhandledError" Dec 11 10:10:27 crc kubenswrapper[4746]: E1211 10:10:27.188112 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying layer: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="6f850dc5-ff1d-4e1e-a8ac-74fac0011d66" Dec 11 10:10:28 crc kubenswrapper[4746]: E1211 10:10:28.108610 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 11 10:10:28 crc kubenswrapper[4746]: E1211 10:10:28.108769 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-g9fnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-2rjdf_openstack(80d59e5e-7506-4206-8cfe-43bf85fd6d0d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:10:28 crc kubenswrapper[4746]: E1211 10:10:28.110094 4746 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-2rjdf" podUID="80d59e5e-7506-4206-8cfe-43bf85fd6d0d" Dec 11 10:10:28 crc kubenswrapper[4746]: E1211 10:10:28.171848 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 11 10:10:28 crc kubenswrapper[4746]: E1211 10:10:28.171986 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rbz6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-q5gf4_openstack(fd0ef0f9-a620-4711-84d4-8b3d21232f50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:10:28 crc kubenswrapper[4746]: E1211 10:10:28.173837 4746 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" podUID="fd0ef0f9-a620-4711-84d4-8b3d21232f50" Dec 11 10:10:28 crc kubenswrapper[4746]: E1211 10:10:28.182746 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="6f850dc5-ff1d-4e1e-a8ac-74fac0011d66" Dec 11 10:10:28 crc kubenswrapper[4746]: E1211 10:10:28.187481 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 11 10:10:28 crc kubenswrapper[4746]: E1211 10:10:28.188563 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvgh8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-shqnk_openstack(8525b54d-365c-4827-a2b9-629a000d149b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:10:28 crc kubenswrapper[4746]: E1211 10:10:28.190139 4746 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" podUID="8525b54d-365c-4827-a2b9-629a000d149b" Dec 11 10:10:28 crc kubenswrapper[4746]: E1211 10:10:28.224845 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 11 10:10:28 crc kubenswrapper[4746]: E1211 10:10:28.224995 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7vr86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-7xrhq_openstack(1680d5f5-0717-4a15-9d49-e692b9d36ebd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:10:28 crc kubenswrapper[4746]: E1211 10:10:28.227266 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-675f4bcbfc-7xrhq" podUID="1680d5f5-0717-4a15-9d49-e692b9d36ebd" Dec 11 10:10:28 crc kubenswrapper[4746]: I1211 10:10:28.593452 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-242vs"] Dec 11 10:10:28 crc kubenswrapper[4746]: I1211 10:10:28.684599 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2rjdf" Dec 11 10:10:28 crc kubenswrapper[4746]: W1211 10:10:28.745067 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87f5deb8_7a92_4b45_b818_8fa95241290b.slice/crio-75313f4361a64369b8e1535a21a05273ae42fa387caa1934f4f2c3ed990897fe WatchSource:0}: Error finding container 75313f4361a64369b8e1535a21a05273ae42fa387caa1934f4f2c3ed990897fe: Status 404 returned error can't find the container with id 75313f4361a64369b8e1535a21a05273ae42fa387caa1934f4f2c3ed990897fe Dec 11 10:10:28 crc kubenswrapper[4746]: I1211 10:10:28.751344 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 11 10:10:28 crc kubenswrapper[4746]: I1211 10:10:28.762618 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rp4z9"] Dec 11 10:10:28 crc kubenswrapper[4746]: I1211 10:10:28.798262 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-dns-svc\") pod \"80d59e5e-7506-4206-8cfe-43bf85fd6d0d\" (UID: \"80d59e5e-7506-4206-8cfe-43bf85fd6d0d\") " Dec 11 10:10:28 crc kubenswrapper[4746]: I1211 10:10:28.798434 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9fnm\" (UniqueName: \"kubernetes.io/projected/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-kube-api-access-g9fnm\") pod \"80d59e5e-7506-4206-8cfe-43bf85fd6d0d\" (UID: 
\"80d59e5e-7506-4206-8cfe-43bf85fd6d0d\") " Dec 11 10:10:28 crc kubenswrapper[4746]: I1211 10:10:28.798529 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-config\") pod \"80d59e5e-7506-4206-8cfe-43bf85fd6d0d\" (UID: \"80d59e5e-7506-4206-8cfe-43bf85fd6d0d\") " Dec 11 10:10:28 crc kubenswrapper[4746]: I1211 10:10:28.798963 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "80d59e5e-7506-4206-8cfe-43bf85fd6d0d" (UID: "80d59e5e-7506-4206-8cfe-43bf85fd6d0d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:10:28 crc kubenswrapper[4746]: I1211 10:10:28.799611 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-config" (OuterVolumeSpecName: "config") pod "80d59e5e-7506-4206-8cfe-43bf85fd6d0d" (UID: "80d59e5e-7506-4206-8cfe-43bf85fd6d0d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:10:28 crc kubenswrapper[4746]: I1211 10:10:28.821370 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-kube-api-access-g9fnm" (OuterVolumeSpecName: "kube-api-access-g9fnm") pod "80d59e5e-7506-4206-8cfe-43bf85fd6d0d" (UID: "80d59e5e-7506-4206-8cfe-43bf85fd6d0d"). InnerVolumeSpecName "kube-api-access-g9fnm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:10:28 crc kubenswrapper[4746]: I1211 10:10:28.837966 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lz68"] Dec 11 10:10:28 crc kubenswrapper[4746]: W1211 10:10:28.846957 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod877b3463_e822_4f57_b9f2_afbcbda4a044.slice/crio-9c4109aba2691d21ea6194c02d48c0349de02e8966f5fce918d5dbb36153c184 WatchSource:0}: Error finding container 9c4109aba2691d21ea6194c02d48c0349de02e8966f5fce918d5dbb36153c184: Status 404 returned error can't find the container with id 9c4109aba2691d21ea6194c02d48c0349de02e8966f5fce918d5dbb36153c184 Dec 11 10:10:28 crc kubenswrapper[4746]: I1211 10:10:28.900566 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:28 crc kubenswrapper[4746]: I1211 10:10:28.901020 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9fnm\" (UniqueName: \"kubernetes.io/projected/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-kube-api-access-g9fnm\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:28 crc kubenswrapper[4746]: I1211 10:10:28.901059 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80d59e5e-7506-4206-8cfe-43bf85fd6d0d-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:28 crc kubenswrapper[4746]: I1211 10:10:28.936833 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.008393 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-w5jlj"] Dec 11 10:10:29 crc kubenswrapper[4746]: W1211 10:10:29.009767 4746 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbd694c4_2e54_4535_a357_0fb7ffdcabdb.slice/crio-d4ccd5a7d6caadbfcd7bcb8df95583209f7021b4556c17e8fbe722946625093d WatchSource:0}: Error finding container d4ccd5a7d6caadbfcd7bcb8df95583209f7021b4556c17e8fbe722946625093d: Status 404 returned error can't find the container with id d4ccd5a7d6caadbfcd7bcb8df95583209f7021b4556c17e8fbe722946625093d Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.189341 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w5jlj" event={"ID":"fbd694c4-2e54-4535-a357-0fb7ffdcabdb","Type":"ContainerStarted","Data":"d4ccd5a7d6caadbfcd7bcb8df95583209f7021b4556c17e8fbe722946625093d"} Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.191511 4746 generic.go:334] "Generic (PLEG): container finished" podID="877b3463-e822-4f57-b9f2-afbcbda4a044" containerID="66cc335ad627230fce1d91ad64bdabd2b34bea6896f815744763a64f9a8829e0" exitCode=0 Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.191563 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lz68" event={"ID":"877b3463-e822-4f57-b9f2-afbcbda4a044","Type":"ContainerDied","Data":"66cc335ad627230fce1d91ad64bdabd2b34bea6896f815744763a64f9a8829e0"} Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.191581 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lz68" event={"ID":"877b3463-e822-4f57-b9f2-afbcbda4a044","Type":"ContainerStarted","Data":"9c4109aba2691d21ea6194c02d48c0349de02e8966f5fce918d5dbb36153c184"} Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.194296 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f17050a1-f53d-4058-9b22-1d26754f13d0","Type":"ContainerStarted","Data":"53186ea77c8862b59cd33e0d6adf7f42916dd5f01c853edab5400af7fe267804"} Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.197804 4746 
generic.go:334] "Generic (PLEG): container finished" podID="87f5deb8-7a92-4b45-b818-8fa95241290b" containerID="8e3d351235b69ab26116cb06c1d3b221d667109f1ab79fbeb22b86d8879240dc" exitCode=0 Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.198239 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4z9" event={"ID":"87f5deb8-7a92-4b45-b818-8fa95241290b","Type":"ContainerDied","Data":"8e3d351235b69ab26116cb06c1d3b221d667109f1ab79fbeb22b86d8879240dc"} Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.198260 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4z9" event={"ID":"87f5deb8-7a92-4b45-b818-8fa95241290b","Type":"ContainerStarted","Data":"75313f4361a64369b8e1535a21a05273ae42fa387caa1934f4f2c3ed990897fe"} Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.201750 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-242vs" event={"ID":"31760b52-7caf-49dd-bf1e-2d2f88b000a2","Type":"ContainerStarted","Data":"3ccd03335c7e207a9d0f3a408474496519c624de64712af711c1688ad65164c7"} Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.202974 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-2rjdf" event={"ID":"80d59e5e-7506-4206-8cfe-43bf85fd6d0d","Type":"ContainerDied","Data":"8c8d1e88b881d2539c2053d7ea3ce3792d687d90d339bcc16a3b6aa22ae66c78"} Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.203085 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-2rjdf" Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.216916 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"44843435-5bdd-416c-af49-abc0ce7c6c03","Type":"ContainerStarted","Data":"dcc918b6fef8d15954cd318aa946431c44f84de9808d2aec01f32e82c97180ba"} Dec 11 10:10:29 crc kubenswrapper[4746]: E1211 10:10:29.219199 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" podUID="fd0ef0f9-a620-4711-84d4-8b3d21232f50" Dec 11 10:10:29 crc kubenswrapper[4746]: E1211 10:10:29.223908 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" podUID="8525b54d-365c-4827-a2b9-629a000d149b" Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.356727 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2rjdf"] Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.363091 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-2rjdf"] Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.556537 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7xrhq" Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.645328 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80d59e5e-7506-4206-8cfe-43bf85fd6d0d" path="/var/lib/kubelet/pods/80d59e5e-7506-4206-8cfe-43bf85fd6d0d/volumes" Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.726262 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vr86\" (UniqueName: \"kubernetes.io/projected/1680d5f5-0717-4a15-9d49-e692b9d36ebd-kube-api-access-7vr86\") pod \"1680d5f5-0717-4a15-9d49-e692b9d36ebd\" (UID: \"1680d5f5-0717-4a15-9d49-e692b9d36ebd\") " Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.726346 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1680d5f5-0717-4a15-9d49-e692b9d36ebd-config\") pod \"1680d5f5-0717-4a15-9d49-e692b9d36ebd\" (UID: \"1680d5f5-0717-4a15-9d49-e692b9d36ebd\") " Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.729369 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1680d5f5-0717-4a15-9d49-e692b9d36ebd-config" (OuterVolumeSpecName: "config") pod "1680d5f5-0717-4a15-9d49-e692b9d36ebd" (UID: "1680d5f5-0717-4a15-9d49-e692b9d36ebd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.733707 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1680d5f5-0717-4a15-9d49-e692b9d36ebd-kube-api-access-7vr86" (OuterVolumeSpecName: "kube-api-access-7vr86") pod "1680d5f5-0717-4a15-9d49-e692b9d36ebd" (UID: "1680d5f5-0717-4a15-9d49-e692b9d36ebd"). InnerVolumeSpecName "kube-api-access-7vr86". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.828560 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vr86\" (UniqueName: \"kubernetes.io/projected/1680d5f5-0717-4a15-9d49-e692b9d36ebd-kube-api-access-7vr86\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.828607 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1680d5f5-0717-4a15-9d49-e692b9d36ebd-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.877668 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:10:29 crc kubenswrapper[4746]: I1211 10:10:29.877723 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:10:30 crc kubenswrapper[4746]: I1211 10:10:30.230726 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-7xrhq" event={"ID":"1680d5f5-0717-4a15-9d49-e692b9d36ebd","Type":"ContainerDied","Data":"17c9a63a5e30880d9cda09c239a425b11859efc266e952ed5b769b23ba44fa9c"} Dec 11 10:10:30 crc kubenswrapper[4746]: I1211 10:10:30.230772 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-7xrhq" Dec 11 10:10:30 crc kubenswrapper[4746]: I1211 10:10:30.353842 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7xrhq"] Dec 11 10:10:30 crc kubenswrapper[4746]: I1211 10:10:30.360824 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-7xrhq"] Dec 11 10:10:31 crc kubenswrapper[4746]: I1211 10:10:31.251533 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lz68" event={"ID":"877b3463-e822-4f57-b9f2-afbcbda4a044","Type":"ContainerStarted","Data":"e7570b3a9b3edf605397d6f2ff4875be391223f38b441ccd4763d4206f348eca"} Dec 11 10:10:31 crc kubenswrapper[4746]: I1211 10:10:31.639187 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1680d5f5-0717-4a15-9d49-e692b9d36ebd" path="/var/lib/kubelet/pods/1680d5f5-0717-4a15-9d49-e692b9d36ebd/volumes" Dec 11 10:10:32 crc kubenswrapper[4746]: I1211 10:10:32.265219 4746 generic.go:334] "Generic (PLEG): container finished" podID="877b3463-e822-4f57-b9f2-afbcbda4a044" containerID="e7570b3a9b3edf605397d6f2ff4875be391223f38b441ccd4763d4206f348eca" exitCode=0 Dec 11 10:10:32 crc kubenswrapper[4746]: I1211 10:10:32.265307 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lz68" event={"ID":"877b3463-e822-4f57-b9f2-afbcbda4a044","Type":"ContainerDied","Data":"e7570b3a9b3edf605397d6f2ff4875be391223f38b441ccd4763d4206f348eca"} Dec 11 10:10:39 crc kubenswrapper[4746]: I1211 10:10:39.347091 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"44843435-5bdd-416c-af49-abc0ce7c6c03","Type":"ContainerStarted","Data":"211063b6d96899de8d20b840c51dca3cbe622b72d5795ca0547a437cec911ca7"} Dec 11 10:10:40 crc kubenswrapper[4746]: I1211 10:10:40.357460 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-w5jlj" event={"ID":"fbd694c4-2e54-4535-a357-0fb7ffdcabdb","Type":"ContainerStarted","Data":"936b8bc666fcfb0794b9ad367989249952ad126d9dc52bb43cfd4c7f15cc2ce4"} Dec 11 10:10:40 crc kubenswrapper[4746]: I1211 10:10:40.359980 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f5170f24-7cb7-43d5-bacc-c8224cfabcf4","Type":"ContainerStarted","Data":"709ec0b957b0922f092a417d288ea3485ed5c564b07abf08b585379340651634"} Dec 11 10:10:40 crc kubenswrapper[4746]: I1211 10:10:40.362006 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f17050a1-f53d-4058-9b22-1d26754f13d0","Type":"ContainerStarted","Data":"db3e2706c5ca8aa85a93a39fbf82427a00f63777955d55422b94a3a0e31a41f3"} Dec 11 10:10:40 crc kubenswrapper[4746]: I1211 10:10:40.366699 4746 generic.go:334] "Generic (PLEG): container finished" podID="87f5deb8-7a92-4b45-b818-8fa95241290b" containerID="990fb98065975c11b34a9170c886cb39f261272a9a69d0ab296652113925c146" exitCode=0 Dec 11 10:10:40 crc kubenswrapper[4746]: I1211 10:10:40.366774 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4z9" event={"ID":"87f5deb8-7a92-4b45-b818-8fa95241290b","Type":"ContainerDied","Data":"990fb98065975c11b34a9170c886cb39f261272a9a69d0ab296652113925c146"} Dec 11 10:10:40 crc kubenswrapper[4746]: I1211 10:10:40.372067 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-242vs" event={"ID":"31760b52-7caf-49dd-bf1e-2d2f88b000a2","Type":"ContainerStarted","Data":"3b803d7fa11e7dd7d8c15ddfeee1c1ef823b078adbef02b2bdf0cb2e291d19ee"} Dec 11 10:10:40 crc kubenswrapper[4746]: I1211 10:10:40.372223 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-242vs" Dec 11 10:10:40 crc kubenswrapper[4746]: I1211 10:10:40.373927 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/memcached-0" event={"ID":"a99efa0a-c9ba-4a4e-9014-fe1efed47a8a","Type":"ContainerStarted","Data":"bc6aeff36f02b393d37149c6dd0ce49f02ae0b235371970f468dfab9d00eee78"} Dec 11 10:10:40 crc kubenswrapper[4746]: I1211 10:10:40.374231 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 11 10:10:40 crc kubenswrapper[4746]: I1211 10:10:40.402687 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-242vs" podStartSLOduration=28.261152693 podStartE2EDuration="38.402664986s" podCreationTimestamp="2025-12-11 10:10:02 +0000 UTC" firstStartedPulling="2025-12-11 10:10:28.61379838 +0000 UTC m=+1001.473661693" lastFinishedPulling="2025-12-11 10:10:38.755310663 +0000 UTC m=+1011.615173986" observedRunningTime="2025-12-11 10:10:40.394839956 +0000 UTC m=+1013.254703279" watchObservedRunningTime="2025-12-11 10:10:40.402664986 +0000 UTC m=+1013.262528299" Dec 11 10:10:40 crc kubenswrapper[4746]: I1211 10:10:40.417308 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=3.065702557 podStartE2EDuration="44.41729234s" podCreationTimestamp="2025-12-11 10:09:56 +0000 UTC" firstStartedPulling="2025-12-11 10:09:57.63884717 +0000 UTC m=+970.498710483" lastFinishedPulling="2025-12-11 10:10:38.990436943 +0000 UTC m=+1011.850300266" observedRunningTime="2025-12-11 10:10:40.414104614 +0000 UTC m=+1013.273967927" watchObservedRunningTime="2025-12-11 10:10:40.41729234 +0000 UTC m=+1013.277155653" Dec 11 10:10:41 crc kubenswrapper[4746]: I1211 10:10:41.384699 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6b37a306-a93c-4cb2-9a15-888df45f0ca7","Type":"ContainerStarted","Data":"c2000b97a92c70f413a797f74ad7af2d1ad478ddb51b4ad876910c803d0020d1"} Dec 11 10:10:41 crc kubenswrapper[4746]: I1211 10:10:41.386260 4746 generic.go:334] "Generic (PLEG): container finished" 
podID="fbd694c4-2e54-4535-a357-0fb7ffdcabdb" containerID="936b8bc666fcfb0794b9ad367989249952ad126d9dc52bb43cfd4c7f15cc2ce4" exitCode=0 Dec 11 10:10:41 crc kubenswrapper[4746]: I1211 10:10:41.386324 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w5jlj" event={"ID":"fbd694c4-2e54-4535-a357-0fb7ffdcabdb","Type":"ContainerDied","Data":"936b8bc666fcfb0794b9ad367989249952ad126d9dc52bb43cfd4c7f15cc2ce4"} Dec 11 10:10:41 crc kubenswrapper[4746]: I1211 10:10:41.388828 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c896f2d4-ac49-431e-b8c5-eda758cfa7cd","Type":"ContainerStarted","Data":"52784d2bd3120e89e64f8e4d1ef55a5e083b21b2ce171165f5bf21bf0df40c74"} Dec 11 10:10:44 crc kubenswrapper[4746]: I1211 10:10:44.446252 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6f850dc5-ff1d-4e1e-a8ac-74fac0011d66","Type":"ContainerStarted","Data":"b2012f35a5122aa38fd256a3f8f8da68e7c2bf68069f2b62a93a5e85c00f5bdb"} Dec 11 10:10:44 crc kubenswrapper[4746]: I1211 10:10:44.447387 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 11 10:10:44 crc kubenswrapper[4746]: I1211 10:10:44.449874 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"44843435-5bdd-416c-af49-abc0ce7c6c03","Type":"ContainerStarted","Data":"9a84a5356fd4dcc51024456eda3c8aafeacfda5627ea0ad9daf6dad6cb144677"} Dec 11 10:10:44 crc kubenswrapper[4746]: I1211 10:10:44.453795 4746 generic.go:334] "Generic (PLEG): container finished" podID="fd0ef0f9-a620-4711-84d4-8b3d21232f50" containerID="741d6999c1ab225d18acd838af3857a34a71609eaf843b0afa61e9a3b59ddaee" exitCode=0 Dec 11 10:10:44 crc kubenswrapper[4746]: I1211 10:10:44.453863 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" 
event={"ID":"fd0ef0f9-a620-4711-84d4-8b3d21232f50","Type":"ContainerDied","Data":"741d6999c1ab225d18acd838af3857a34a71609eaf843b0afa61e9a3b59ddaee"} Dec 11 10:10:44 crc kubenswrapper[4746]: I1211 10:10:44.465457 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.877297419 podStartE2EDuration="46.465440271s" podCreationTimestamp="2025-12-11 10:09:58 +0000 UTC" firstStartedPulling="2025-12-11 10:10:00.235817455 +0000 UTC m=+973.095680768" lastFinishedPulling="2025-12-11 10:10:43.823960307 +0000 UTC m=+1016.683823620" observedRunningTime="2025-12-11 10:10:44.464823355 +0000 UTC m=+1017.324686678" watchObservedRunningTime="2025-12-11 10:10:44.465440271 +0000 UTC m=+1017.325303584" Dec 11 10:10:44 crc kubenswrapper[4746]: I1211 10:10:44.473338 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lz68" event={"ID":"877b3463-e822-4f57-b9f2-afbcbda4a044","Type":"ContainerStarted","Data":"b009ffb61d92327f18265cec8e84e17573f7719c4af6c0cd09881f481b774605"} Dec 11 10:10:44 crc kubenswrapper[4746]: I1211 10:10:44.477426 4746 generic.go:334] "Generic (PLEG): container finished" podID="8525b54d-365c-4827-a2b9-629a000d149b" containerID="fc14c1468d85d0d6739103c85194d0096ef70862a22652e935eeb5c1c58502fd" exitCode=0 Dec 11 10:10:44 crc kubenswrapper[4746]: I1211 10:10:44.477466 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" event={"ID":"8525b54d-365c-4827-a2b9-629a000d149b","Type":"ContainerDied","Data":"fc14c1468d85d0d6739103c85194d0096ef70862a22652e935eeb5c1c58502fd"} Dec 11 10:10:44 crc kubenswrapper[4746]: I1211 10:10:44.480602 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4z9" event={"ID":"87f5deb8-7a92-4b45-b818-8fa95241290b","Type":"ContainerStarted","Data":"ac9b0641044d66d996b2dc76b82830e4e9c2a27ca04c3715acde83b69caecad4"} Dec 11 10:10:44 
crc kubenswrapper[4746]: I1211 10:10:44.486612 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f35f21ce-59cb-4ee0-850c-9aba4010c890","Type":"ContainerStarted","Data":"9313ab0b190c80e177d86fab3632ae2696d87ce29a2101861d234a040a5ac99d"} Dec 11 10:10:44 crc kubenswrapper[4746]: I1211 10:10:44.489294 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=27.478097724 podStartE2EDuration="42.489279022s" podCreationTimestamp="2025-12-11 10:10:02 +0000 UTC" firstStartedPulling="2025-12-11 10:10:28.943513763 +0000 UTC m=+1001.803377076" lastFinishedPulling="2025-12-11 10:10:43.954695061 +0000 UTC m=+1016.814558374" observedRunningTime="2025-12-11 10:10:44.488420529 +0000 UTC m=+1017.348283852" watchObservedRunningTime="2025-12-11 10:10:44.489279022 +0000 UTC m=+1017.349142325" Dec 11 10:10:44 crc kubenswrapper[4746]: I1211 10:10:44.497129 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w5jlj" event={"ID":"fbd694c4-2e54-4535-a357-0fb7ffdcabdb","Type":"ContainerStarted","Data":"c4d534b09b5638c2fe203fb88d9f567109b715916b63606e812eac1848ba91f7"} Dec 11 10:10:44 crc kubenswrapper[4746]: I1211 10:10:44.511583 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f17050a1-f53d-4058-9b22-1d26754f13d0","Type":"ContainerStarted","Data":"6ef7ef02037aaca996fb4d3b1fa2b5b6fe77b80803c429e8e9438b217aad60c9"} Dec 11 10:10:44 crc kubenswrapper[4746]: I1211 10:10:44.551785 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rp4z9" podStartSLOduration=25.841944044999998 podStartE2EDuration="40.551761502s" podCreationTimestamp="2025-12-11 10:10:04 +0000 UTC" firstStartedPulling="2025-12-11 10:10:29.200756898 +0000 UTC m=+1002.060620211" lastFinishedPulling="2025-12-11 10:10:43.910574355 +0000 UTC m=+1016.770437668" 
observedRunningTime="2025-12-11 10:10:44.544055984 +0000 UTC m=+1017.403919297" watchObservedRunningTime="2025-12-11 10:10:44.551761502 +0000 UTC m=+1017.411624815" Dec 11 10:10:44 crc kubenswrapper[4746]: I1211 10:10:44.569766 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4lz68" podStartSLOduration=20.814006741 podStartE2EDuration="31.569751165s" podCreationTimestamp="2025-12-11 10:10:13 +0000 UTC" firstStartedPulling="2025-12-11 10:10:29.193622477 +0000 UTC m=+1002.053485810" lastFinishedPulling="2025-12-11 10:10:39.949366931 +0000 UTC m=+1012.809230234" observedRunningTime="2025-12-11 10:10:44.566685232 +0000 UTC m=+1017.426548545" watchObservedRunningTime="2025-12-11 10:10:44.569751165 +0000 UTC m=+1017.429614478" Dec 11 10:10:44 crc kubenswrapper[4746]: I1211 10:10:44.621350 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=24.349091435 podStartE2EDuration="39.621327371s" podCreationTimestamp="2025-12-11 10:10:05 +0000 UTC" firstStartedPulling="2025-12-11 10:10:28.750192696 +0000 UTC m=+1001.610056009" lastFinishedPulling="2025-12-11 10:10:44.022428632 +0000 UTC m=+1016.882291945" observedRunningTime="2025-12-11 10:10:44.614662483 +0000 UTC m=+1017.474525866" watchObservedRunningTime="2025-12-11 10:10:44.621327371 +0000 UTC m=+1017.481190684" Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.343317 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.343657 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.540861 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" 
event={"ID":"fd0ef0f9-a620-4711-84d4-8b3d21232f50","Type":"ContainerStarted","Data":"27a674a64c60f831b974ae07c7054efbb5175968dc7b20757c8070fc821a783a"} Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.541098 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.543551 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w5jlj" event={"ID":"fbd694c4-2e54-4535-a357-0fb7ffdcabdb","Type":"ContainerStarted","Data":"7db6c188c0241047336da4622dba46b78df7bb3507f4c4a0ba0c1ded0310ee0e"} Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.543734 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.543765 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.547901 4746 generic.go:334] "Generic (PLEG): container finished" podID="f5170f24-7cb7-43d5-bacc-c8224cfabcf4" containerID="709ec0b957b0922f092a417d288ea3485ed5c564b07abf08b585379340651634" exitCode=0 Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.547979 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f5170f24-7cb7-43d5-bacc-c8224cfabcf4","Type":"ContainerDied","Data":"709ec0b957b0922f092a417d288ea3485ed5c564b07abf08b585379340651634"} Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.550598 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" event={"ID":"8525b54d-365c-4827-a2b9-629a000d149b","Type":"ContainerStarted","Data":"cb1443e8b2a4c9d18df031306fdcab6dcb74fd6a69eb29762c75402e945a96be"} Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.551123 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.560219 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.570176 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" podStartSLOduration=2.907596006 podStartE2EDuration="53.570152768s" podCreationTimestamp="2025-12-11 10:09:52 +0000 UTC" firstStartedPulling="2025-12-11 10:09:53.510822438 +0000 UTC m=+966.370685751" lastFinishedPulling="2025-12-11 10:10:44.1733792 +0000 UTC m=+1017.033242513" observedRunningTime="2025-12-11 10:10:45.56650299 +0000 UTC m=+1018.426366323" watchObservedRunningTime="2025-12-11 10:10:45.570152768 +0000 UTC m=+1018.430016081" Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.618888 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" podStartSLOduration=3.316398304 podStartE2EDuration="53.618866567s" podCreationTimestamp="2025-12-11 10:09:52 +0000 UTC" firstStartedPulling="2025-12-11 10:09:53.700328642 +0000 UTC m=+966.560191955" lastFinishedPulling="2025-12-11 10:10:44.002796905 +0000 UTC m=+1016.862660218" observedRunningTime="2025-12-11 10:10:45.614439839 +0000 UTC m=+1018.474303162" watchObservedRunningTime="2025-12-11 10:10:45.618866567 +0000 UTC m=+1018.478729880" Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.643064 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-w5jlj" podStartSLOduration=33.715865836 podStartE2EDuration="43.643005046s" podCreationTimestamp="2025-12-11 10:10:02 +0000 UTC" firstStartedPulling="2025-12-11 10:10:29.013018412 +0000 UTC m=+1001.872881725" lastFinishedPulling="2025-12-11 10:10:38.940157622 +0000 UTC m=+1011.800020935" observedRunningTime="2025-12-11 10:10:45.637805196 +0000 UTC 
m=+1018.497668509" watchObservedRunningTime="2025-12-11 10:10:45.643005046 +0000 UTC m=+1018.502868369" Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.645865 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.776420 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:45 crc kubenswrapper[4746]: I1211 10:10:45.826428 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:46 crc kubenswrapper[4746]: I1211 10:10:46.400957 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-rp4z9" podUID="87f5deb8-7a92-4b45-b818-8fa95241290b" containerName="registry-server" probeResult="failure" output=< Dec 11 10:10:46 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s Dec 11 10:10:46 crc kubenswrapper[4746]: > Dec 11 10:10:46 crc kubenswrapper[4746]: I1211 10:10:46.565346 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:46 crc kubenswrapper[4746]: I1211 10:10:46.565387 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:46 crc kubenswrapper[4746]: I1211 10:10:46.677061 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 11 10:10:46 crc kubenswrapper[4746]: I1211 10:10:46.677129 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 11 10:10:46 crc kubenswrapper[4746]: I1211 10:10:46.882285 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.042073 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-57d769cc4f-shqnk"] Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.077822 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-dcqtn"] Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.079582 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.082111 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.097951 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-dcqtn"] Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.176294 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-jklpb"] Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.178369 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.184201 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jklpb"] Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.184690 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.281249 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-dcqtn\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") " pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.281335 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-dcqtn\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") " pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.281616 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-ovn-rundir\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.281825 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-config\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.281895 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-combined-ca-bundle\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.281953 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvmqq\" (UniqueName: \"kubernetes.io/projected/afbb11e6-6530-4f6c-af30-b73180c8c725-kube-api-access-fvmqq\") pod \"dnsmasq-dns-6bc7876d45-dcqtn\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") " pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.282025 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-config\") pod \"dnsmasq-dns-6bc7876d45-dcqtn\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") " pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.282173 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-ovs-rundir\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.282258 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2hqq\" (UniqueName: \"kubernetes.io/projected/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-kube-api-access-k2hqq\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.282304 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.309482 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q5gf4"] Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.347370 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-kw9sp"] Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.349202 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.352120 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.360016 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-kw9sp"] Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.383950 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-dcqtn\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") " pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.384014 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-dcqtn\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") " pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.384093 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-ovn-rundir\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.384130 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-config\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.384164 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-combined-ca-bundle\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.384191 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvmqq\" (UniqueName: \"kubernetes.io/projected/afbb11e6-6530-4f6c-af30-b73180c8c725-kube-api-access-fvmqq\") pod \"dnsmasq-dns-6bc7876d45-dcqtn\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") " pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.384222 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-config\") pod \"dnsmasq-dns-6bc7876d45-dcqtn\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") " pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.384261 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-ovs-rundir\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.384291 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2hqq\" (UniqueName: \"kubernetes.io/projected/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-kube-api-access-k2hqq\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.384317 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.385394 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-dcqtn\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") " pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.385850 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-ovs-rundir\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.386029 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-dcqtn\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") " pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.386237 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-config\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.386637 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-config\") pod \"dnsmasq-dns-6bc7876d45-dcqtn\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") " pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.390257 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-ovn-rundir\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.392205 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.397110 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-combined-ca-bundle\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.421813 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvmqq\" (UniqueName: \"kubernetes.io/projected/afbb11e6-6530-4f6c-af30-b73180c8c725-kube-api-access-fvmqq\") pod \"dnsmasq-dns-6bc7876d45-dcqtn\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") " pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.436488 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.437738 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-k2hqq\" (UniqueName: \"kubernetes.io/projected/bc3dc4dd-014a-42fe-a1e7-ee2d10866d75-kube-api-access-k2hqq\") pod \"ovn-controller-metrics-jklpb\" (UID: \"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75\") " pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.438683 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.440959 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.444474 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.446279 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9tsvg" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.446637 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.466908 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.486066 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-kw9sp\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.486126 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-ovsdbserver-sb\") pod 
\"dnsmasq-dns-8554648995-kw9sp\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.486186 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-config\") pod \"dnsmasq-dns-8554648995-kw9sp\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.486255 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-dns-svc\") pod \"dnsmasq-dns-8554648995-kw9sp\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.486293 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7b7m\" (UniqueName: \"kubernetes.io/projected/95fab54f-52bc-4bff-8c9b-819cff12d9b9-kube-api-access-c7b7m\") pod \"dnsmasq-dns-8554648995-kw9sp\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.499258 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-jklpb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.575844 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"f5170f24-7cb7-43d5-bacc-c8224cfabcf4","Type":"ContainerStarted","Data":"d98f0511a2fae1da89ee78680e316f0f8509cf58041be07be75637d6f7ed2fb4"} Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.577394 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" podUID="8525b54d-365c-4827-a2b9-629a000d149b" containerName="dnsmasq-dns" containerID="cri-o://cb1443e8b2a4c9d18df031306fdcab6dcb74fd6a69eb29762c75402e945a96be" gracePeriod=10 Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.577641 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" podUID="fd0ef0f9-a620-4711-84d4-8b3d21232f50" containerName="dnsmasq-dns" containerID="cri-o://27a674a64c60f831b974ae07c7054efbb5175968dc7b20757c8070fc821a783a" gracePeriod=10 Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.588996 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-kw9sp\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.589105 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-config\") pod \"dnsmasq-dns-8554648995-kw9sp\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.589165 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.589195 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-scripts\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.589228 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.589261 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-dns-svc\") pod \"dnsmasq-dns-8554648995-kw9sp\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.589285 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rlg9\" (UniqueName: \"kubernetes.io/projected/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-kube-api-access-9rlg9\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.589315 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-config\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.589345 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7b7m\" (UniqueName: \"kubernetes.io/projected/95fab54f-52bc-4bff-8c9b-819cff12d9b9-kube-api-access-c7b7m\") pod \"dnsmasq-dns-8554648995-kw9sp\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.589400 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-kw9sp\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.589428 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.589454 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.590222 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-ovsdbserver-sb\") pod 
\"dnsmasq-dns-8554648995-kw9sp\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.593696 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-config\") pod \"dnsmasq-dns-8554648995-kw9sp\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.594710 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-dns-svc\") pod \"dnsmasq-dns-8554648995-kw9sp\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.596281 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.604117 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-kw9sp\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.607663 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=11.388044401 podStartE2EDuration="52.607640769s" podCreationTimestamp="2025-12-11 10:09:55 +0000 UTC" firstStartedPulling="2025-12-11 10:09:57.74312796 +0000 UTC m=+970.602991273" lastFinishedPulling="2025-12-11 10:10:38.962724328 +0000 UTC m=+1011.822587641" observedRunningTime="2025-12-11 10:10:47.597898907 +0000 UTC m=+1020.457762230" watchObservedRunningTime="2025-12-11 10:10:47.607640769 
+0000 UTC m=+1020.467504082" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.628290 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7b7m\" (UniqueName: \"kubernetes.io/projected/95fab54f-52bc-4bff-8c9b-819cff12d9b9-kube-api-access-c7b7m\") pod \"dnsmasq-dns-8554648995-kw9sp\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.666107 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.692225 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.692269 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.692644 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.692691 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-scripts\") pod \"ovn-northd-0\" (UID: 
\"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.692727 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.692783 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rlg9\" (UniqueName: \"kubernetes.io/projected/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-kube-api-access-9rlg9\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.692804 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-config\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.695747 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.695878 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.695965 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.696212 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 11 10:10:47 crc 
kubenswrapper[4746]: I1211 10:10:47.700668 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.701806 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.702159 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.704525 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-scripts\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.706009 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-config\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.712787 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.716956 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rlg9\" (UniqueName: \"kubernetes.io/projected/01f4f65b-37fc-4500-a9ba-ba3a717c37bb-kube-api-access-9rlg9\") pod \"ovn-northd-0\" (UID: \"01f4f65b-37fc-4500-a9ba-ba3a717c37bb\") " pod="openstack/ovn-northd-0" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.825242 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-9tsvg" Dec 11 10:10:47 crc kubenswrapper[4746]: I1211 10:10:47.834128 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.016452 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-jklpb"] Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.051289 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.192287 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.213678 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvgh8\" (UniqueName: \"kubernetes.io/projected/8525b54d-365c-4827-a2b9-629a000d149b-kube-api-access-nvgh8\") pod \"8525b54d-365c-4827-a2b9-629a000d149b\" (UID: \"8525b54d-365c-4827-a2b9-629a000d149b\") " Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.213776 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8525b54d-365c-4827-a2b9-629a000d149b-config\") pod \"8525b54d-365c-4827-a2b9-629a000d149b\" (UID: \"8525b54d-365c-4827-a2b9-629a000d149b\") " Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.213918 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8525b54d-365c-4827-a2b9-629a000d149b-dns-svc\") pod \"8525b54d-365c-4827-a2b9-629a000d149b\" (UID: \"8525b54d-365c-4827-a2b9-629a000d149b\") " Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.221415 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8525b54d-365c-4827-a2b9-629a000d149b-kube-api-access-nvgh8" (OuterVolumeSpecName: "kube-api-access-nvgh8") pod "8525b54d-365c-4827-a2b9-629a000d149b" (UID: "8525b54d-365c-4827-a2b9-629a000d149b"). InnerVolumeSpecName "kube-api-access-nvgh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.249691 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8525b54d-365c-4827-a2b9-629a000d149b-config" (OuterVolumeSpecName: "config") pod "8525b54d-365c-4827-a2b9-629a000d149b" (UID: "8525b54d-365c-4827-a2b9-629a000d149b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.254336 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8525b54d-365c-4827-a2b9-629a000d149b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8525b54d-365c-4827-a2b9-629a000d149b" (UID: "8525b54d-365c-4827-a2b9-629a000d149b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.322169 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd0ef0f9-a620-4711-84d4-8b3d21232f50-config\") pod \"fd0ef0f9-a620-4711-84d4-8b3d21232f50\" (UID: \"fd0ef0f9-a620-4711-84d4-8b3d21232f50\") " Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.322258 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbz6d\" (UniqueName: \"kubernetes.io/projected/fd0ef0f9-a620-4711-84d4-8b3d21232f50-kube-api-access-rbz6d\") pod \"fd0ef0f9-a620-4711-84d4-8b3d21232f50\" (UID: \"fd0ef0f9-a620-4711-84d4-8b3d21232f50\") " Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.322288 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd0ef0f9-a620-4711-84d4-8b3d21232f50-dns-svc\") pod \"fd0ef0f9-a620-4711-84d4-8b3d21232f50\" (UID: \"fd0ef0f9-a620-4711-84d4-8b3d21232f50\") " Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.322743 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvgh8\" (UniqueName: \"kubernetes.io/projected/8525b54d-365c-4827-a2b9-629a000d149b-kube-api-access-nvgh8\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.322760 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8525b54d-365c-4827-a2b9-629a000d149b-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.322768 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8525b54d-365c-4827-a2b9-629a000d149b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.324239 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-kw9sp"] Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.327598 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd0ef0f9-a620-4711-84d4-8b3d21232f50-kube-api-access-rbz6d" (OuterVolumeSpecName: "kube-api-access-rbz6d") pod "fd0ef0f9-a620-4711-84d4-8b3d21232f50" (UID: "fd0ef0f9-a620-4711-84d4-8b3d21232f50"). InnerVolumeSpecName "kube-api-access-rbz6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.397202 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd0ef0f9-a620-4711-84d4-8b3d21232f50-config" (OuterVolumeSpecName: "config") pod "fd0ef0f9-a620-4711-84d4-8b3d21232f50" (UID: "fd0ef0f9-a620-4711-84d4-8b3d21232f50"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.398941 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd0ef0f9-a620-4711-84d4-8b3d21232f50-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fd0ef0f9-a620-4711-84d4-8b3d21232f50" (UID: "fd0ef0f9-a620-4711-84d4-8b3d21232f50"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.423903 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd0ef0f9-a620-4711-84d4-8b3d21232f50-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.423935 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbz6d\" (UniqueName: \"kubernetes.io/projected/fd0ef0f9-a620-4711-84d4-8b3d21232f50-kube-api-access-rbz6d\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.423947 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd0ef0f9-a620-4711-84d4-8b3d21232f50-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.455300 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-dcqtn"] Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.472005 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 11 10:10:48 crc kubenswrapper[4746]: W1211 10:10:48.492268 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01f4f65b_37fc_4500_a9ba_ba3a717c37bb.slice/crio-43c80aa4751de87ef85ba2b8cb1acb717adbc42b2be36282bac5e3439b8630af WatchSource:0}: Error finding container 43c80aa4751de87ef85ba2b8cb1acb717adbc42b2be36282bac5e3439b8630af: Status 404 returned error can't find the container with id 43c80aa4751de87ef85ba2b8cb1acb717adbc42b2be36282bac5e3439b8630af Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.598428 4746 generic.go:334] "Generic (PLEG): container finished" podID="8525b54d-365c-4827-a2b9-629a000d149b" containerID="cb1443e8b2a4c9d18df031306fdcab6dcb74fd6a69eb29762c75402e945a96be" exitCode=0 Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 
10:10:48.598541 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" event={"ID":"8525b54d-365c-4827-a2b9-629a000d149b","Type":"ContainerDied","Data":"cb1443e8b2a4c9d18df031306fdcab6dcb74fd6a69eb29762c75402e945a96be"} Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.598607 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" event={"ID":"8525b54d-365c-4827-a2b9-629a000d149b","Type":"ContainerDied","Data":"f8073dbfab6625432881e000ad04ab713609f41cbbc9bb86167db84c10abc0a9"} Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.598650 4746 scope.go:117] "RemoveContainer" containerID="cb1443e8b2a4c9d18df031306fdcab6dcb74fd6a69eb29762c75402e945a96be" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.598692 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-shqnk" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.605110 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kw9sp" event={"ID":"95fab54f-52bc-4bff-8c9b-819cff12d9b9","Type":"ContainerStarted","Data":"da50a64eb3d099a0d58c7b3179dc8dfaf0b602712b78a50b31f172ba389a8a1a"} Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.609980 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" event={"ID":"afbb11e6-6530-4f6c-af30-b73180c8c725","Type":"ContainerStarted","Data":"28092c35d1d0497180f0b76766b92747d1712fdbc955cb55439f1adaeafc1792"} Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.613873 4746 generic.go:334] "Generic (PLEG): container finished" podID="fd0ef0f9-a620-4711-84d4-8b3d21232f50" containerID="27a674a64c60f831b974ae07c7054efbb5175968dc7b20757c8070fc821a783a" exitCode=0 Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.613948 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" 
event={"ID":"fd0ef0f9-a620-4711-84d4-8b3d21232f50","Type":"ContainerDied","Data":"27a674a64c60f831b974ae07c7054efbb5175968dc7b20757c8070fc821a783a"} Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.613981 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" event={"ID":"fd0ef0f9-a620-4711-84d4-8b3d21232f50","Type":"ContainerDied","Data":"6ae9e457ccabdc3c27d036e8eeeaf495434e0f6cef2e5aa96277f42e6e84008f"} Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.614084 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-q5gf4" Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.618450 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"01f4f65b-37fc-4500-a9ba-ba3a717c37bb","Type":"ContainerStarted","Data":"43c80aa4751de87ef85ba2b8cb1acb717adbc42b2be36282bac5e3439b8630af"} Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.619942 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jklpb" event={"ID":"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75","Type":"ContainerStarted","Data":"f567850e5468b502b135cc64e6ea030b75326b8c55fa10dbb56f69883e41295c"} Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.620012 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-jklpb" event={"ID":"bc3dc4dd-014a-42fe-a1e7-ee2d10866d75","Type":"ContainerStarted","Data":"20759a458930de7423b45ee043e810324653897582fdf8b77facbc11a7dc13d9"} Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.645855 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-jklpb" podStartSLOduration=1.645829628 podStartE2EDuration="1.645829628s" podCreationTimestamp="2025-12-11 10:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-11 10:10:48.639133898 +0000 UTC m=+1021.498997221" watchObservedRunningTime="2025-12-11 10:10:48.645829628 +0000 UTC m=+1021.505692961"
Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.651111 4746 scope.go:117] "RemoveContainer" containerID="fc14c1468d85d0d6739103c85194d0096ef70862a22652e935eeb5c1c58502fd"
Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.693006 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-shqnk"]
Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.714695 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-shqnk"]
Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.722549 4746 scope.go:117] "RemoveContainer" containerID="cb1443e8b2a4c9d18df031306fdcab6dcb74fd6a69eb29762c75402e945a96be"
Dec 11 10:10:48 crc kubenswrapper[4746]: E1211 10:10:48.726158 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb1443e8b2a4c9d18df031306fdcab6dcb74fd6a69eb29762c75402e945a96be\": container with ID starting with cb1443e8b2a4c9d18df031306fdcab6dcb74fd6a69eb29762c75402e945a96be not found: ID does not exist" containerID="cb1443e8b2a4c9d18df031306fdcab6dcb74fd6a69eb29762c75402e945a96be"
Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.726204 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb1443e8b2a4c9d18df031306fdcab6dcb74fd6a69eb29762c75402e945a96be"} err="failed to get container status \"cb1443e8b2a4c9d18df031306fdcab6dcb74fd6a69eb29762c75402e945a96be\": rpc error: code = NotFound desc = could not find container \"cb1443e8b2a4c9d18df031306fdcab6dcb74fd6a69eb29762c75402e945a96be\": container with ID starting with cb1443e8b2a4c9d18df031306fdcab6dcb74fd6a69eb29762c75402e945a96be not found: ID does not exist"
Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.726237 4746 scope.go:117] "RemoveContainer" containerID="fc14c1468d85d0d6739103c85194d0096ef70862a22652e935eeb5c1c58502fd"
Dec 11 10:10:48 crc kubenswrapper[4746]: E1211 10:10:48.726951 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc14c1468d85d0d6739103c85194d0096ef70862a22652e935eeb5c1c58502fd\": container with ID starting with fc14c1468d85d0d6739103c85194d0096ef70862a22652e935eeb5c1c58502fd not found: ID does not exist" containerID="fc14c1468d85d0d6739103c85194d0096ef70862a22652e935eeb5c1c58502fd"
Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.727070 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc14c1468d85d0d6739103c85194d0096ef70862a22652e935eeb5c1c58502fd"} err="failed to get container status \"fc14c1468d85d0d6739103c85194d0096ef70862a22652e935eeb5c1c58502fd\": rpc error: code = NotFound desc = could not find container \"fc14c1468d85d0d6739103c85194d0096ef70862a22652e935eeb5c1c58502fd\": container with ID starting with fc14c1468d85d0d6739103c85194d0096ef70862a22652e935eeb5c1c58502fd not found: ID does not exist"
Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.727118 4746 scope.go:117] "RemoveContainer" containerID="27a674a64c60f831b974ae07c7054efbb5175968dc7b20757c8070fc821a783a"
Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.732767 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q5gf4"]
Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.742406 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-q5gf4"]
Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.757016 4746 scope.go:117] "RemoveContainer" containerID="741d6999c1ab225d18acd838af3857a34a71609eaf843b0afa61e9a3b59ddaee"
Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.779862 4746 scope.go:117] "RemoveContainer" containerID="27a674a64c60f831b974ae07c7054efbb5175968dc7b20757c8070fc821a783a"
Dec 11 10:10:48 crc kubenswrapper[4746]: E1211 10:10:48.780325 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27a674a64c60f831b974ae07c7054efbb5175968dc7b20757c8070fc821a783a\": container with ID starting with 27a674a64c60f831b974ae07c7054efbb5175968dc7b20757c8070fc821a783a not found: ID does not exist" containerID="27a674a64c60f831b974ae07c7054efbb5175968dc7b20757c8070fc821a783a"
Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.780362 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27a674a64c60f831b974ae07c7054efbb5175968dc7b20757c8070fc821a783a"} err="failed to get container status \"27a674a64c60f831b974ae07c7054efbb5175968dc7b20757c8070fc821a783a\": rpc error: code = NotFound desc = could not find container \"27a674a64c60f831b974ae07c7054efbb5175968dc7b20757c8070fc821a783a\": container with ID starting with 27a674a64c60f831b974ae07c7054efbb5175968dc7b20757c8070fc821a783a not found: ID does not exist"
Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.780384 4746 scope.go:117] "RemoveContainer" containerID="741d6999c1ab225d18acd838af3857a34a71609eaf843b0afa61e9a3b59ddaee"
Dec 11 10:10:48 crc kubenswrapper[4746]: E1211 10:10:48.780758 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"741d6999c1ab225d18acd838af3857a34a71609eaf843b0afa61e9a3b59ddaee\": container with ID starting with 741d6999c1ab225d18acd838af3857a34a71609eaf843b0afa61e9a3b59ddaee not found: ID does not exist" containerID="741d6999c1ab225d18acd838af3857a34a71609eaf843b0afa61e9a3b59ddaee"
Dec 11 10:10:48 crc kubenswrapper[4746]: I1211 10:10:48.780794 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"741d6999c1ab225d18acd838af3857a34a71609eaf843b0afa61e9a3b59ddaee"} err="failed to get container status \"741d6999c1ab225d18acd838af3857a34a71609eaf843b0afa61e9a3b59ddaee\": rpc error: code = NotFound desc = could not find container \"741d6999c1ab225d18acd838af3857a34a71609eaf843b0afa61e9a3b59ddaee\": container with ID starting with 741d6999c1ab225d18acd838af3857a34a71609eaf843b0afa61e9a3b59ddaee not found: ID does not exist"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.119092 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.308400 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-dcqtn"]
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.352663 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ckzx7"]
Dec 11 10:10:49 crc kubenswrapper[4746]: E1211 10:10:49.352986 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd0ef0f9-a620-4711-84d4-8b3d21232f50" containerName="dnsmasq-dns"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.353002 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd0ef0f9-a620-4711-84d4-8b3d21232f50" containerName="dnsmasq-dns"
Dec 11 10:10:49 crc kubenswrapper[4746]: E1211 10:10:49.353017 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8525b54d-365c-4827-a2b9-629a000d149b" containerName="init"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.353024 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="8525b54d-365c-4827-a2b9-629a000d149b" containerName="init"
Dec 11 10:10:49 crc kubenswrapper[4746]: E1211 10:10:49.353073 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8525b54d-365c-4827-a2b9-629a000d149b" containerName="dnsmasq-dns"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.353082 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="8525b54d-365c-4827-a2b9-629a000d149b" containerName="dnsmasq-dns"
Dec 11 10:10:49 crc kubenswrapper[4746]: E1211 10:10:49.353095 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd0ef0f9-a620-4711-84d4-8b3d21232f50" containerName="init"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.353101 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd0ef0f9-a620-4711-84d4-8b3d21232f50" containerName="init"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.353244 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd0ef0f9-a620-4711-84d4-8b3d21232f50" containerName="dnsmasq-dns"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.353263 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="8525b54d-365c-4827-a2b9-629a000d149b" containerName="dnsmasq-dns"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.362187 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.400549 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ckzx7"]
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.446892 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-ckzx7\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.447017 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp9d4\" (UniqueName: \"kubernetes.io/projected/3ee5c96f-8699-41e0-9318-4a5ad8af233d-kube-api-access-pp9d4\") pod \"dnsmasq-dns-b8fbc5445-ckzx7\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.447071 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-ckzx7\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.447097 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-config\") pod \"dnsmasq-dns-b8fbc5445-ckzx7\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.447132 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-ckzx7\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.549035 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-config\") pod \"dnsmasq-dns-b8fbc5445-ckzx7\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.549099 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-ckzx7\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.549154 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-ckzx7\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.549234 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp9d4\" (UniqueName: \"kubernetes.io/projected/3ee5c96f-8699-41e0-9318-4a5ad8af233d-kube-api-access-pp9d4\") pod \"dnsmasq-dns-b8fbc5445-ckzx7\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.549280 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-ckzx7\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.550106 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-ckzx7\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.550763 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-config\") pod \"dnsmasq-dns-b8fbc5445-ckzx7\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.551676 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-ckzx7\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.552130 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-ckzx7\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.579696 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp9d4\" (UniqueName: \"kubernetes.io/projected/3ee5c96f-8699-41e0-9318-4a5ad8af233d-kube-api-access-pp9d4\") pod \"dnsmasq-dns-b8fbc5445-ckzx7\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.643410 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8525b54d-365c-4827-a2b9-629a000d149b" path="/var/lib/kubelet/pods/8525b54d-365c-4827-a2b9-629a000d149b/volumes"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.644016 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd0ef0f9-a620-4711-84d4-8b3d21232f50" path="/var/lib/kubelet/pods/fd0ef0f9-a620-4711-84d4-8b3d21232f50/volumes"
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.644599 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kw9sp" event={"ID":"95fab54f-52bc-4bff-8c9b-819cff12d9b9","Type":"ContainerStarted","Data":"2f29bb532018d5a9fa46938673f14f60329c8aeb633a621f71351127b342d7fc"}
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.646565 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" event={"ID":"afbb11e6-6530-4f6c-af30-b73180c8c725","Type":"ContainerStarted","Data":"253974bfc19e510acee63e8847bfd63418521a462c7a9d48592171a97243ba52"}
Dec 11 10:10:49 crc kubenswrapper[4746]: I1211 10:10:49.728582 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.257633 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ckzx7"]
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.362158 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.369414 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.371251 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.371795 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.372927 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.373992 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-mvxjx"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.387166 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.473275 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp2dt\" (UniqueName: \"kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-kube-api-access-vp2dt\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.473638 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-lock\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.473733 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.474018 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-cache\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.474096 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.575971 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp2dt\" (UniqueName: \"kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-kube-api-access-vp2dt\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.576057 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-lock\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.576097 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.576219 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-cache\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.576247 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:50 crc kubenswrapper[4746]: E1211 10:10:50.576564 4746 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 11 10:10:50 crc kubenswrapper[4746]: E1211 10:10:50.576647 4746 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.576761 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0"
Dec 11 10:10:50 crc kubenswrapper[4746]: E1211 10:10:50.576769 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift podName:3df27f8b-76bd-441d-9c3a-2b8bd1f250c7 nodeName:}" failed. No retries permitted until 2025-12-11 10:10:51.076747945 +0000 UTC m=+1023.936611258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift") pod "swift-storage-0" (UID: "3df27f8b-76bd-441d-9c3a-2b8bd1f250c7") : configmap "swift-ring-files" not found
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.576919 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-lock\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.576952 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-cache\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.596033 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp2dt\" (UniqueName: \"kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-kube-api-access-vp2dt\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.600001 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.661586 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7" event={"ID":"3ee5c96f-8699-41e0-9318-4a5ad8af233d","Type":"ContainerStarted","Data":"75b8e6889e5a451dc842f80f1587328823d91971a37a0031db509404796cfd9d"}
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.663555 4746 generic.go:334] "Generic (PLEG): container finished" podID="95fab54f-52bc-4bff-8c9b-819cff12d9b9" containerID="2f29bb532018d5a9fa46938673f14f60329c8aeb633a621f71351127b342d7fc" exitCode=0
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.663634 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kw9sp" event={"ID":"95fab54f-52bc-4bff-8c9b-819cff12d9b9","Type":"ContainerDied","Data":"2f29bb532018d5a9fa46938673f14f60329c8aeb633a621f71351127b342d7fc"}
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.665431 4746 generic.go:334] "Generic (PLEG): container finished" podID="afbb11e6-6530-4f6c-af30-b73180c8c725" containerID="253974bfc19e510acee63e8847bfd63418521a462c7a9d48592171a97243ba52" exitCode=0
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.665502 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" event={"ID":"afbb11e6-6530-4f6c-af30-b73180c8c725","Type":"ContainerDied","Data":"253974bfc19e510acee63e8847bfd63418521a462c7a9d48592171a97243ba52"}
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.949977 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-7lhsc"]
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.951743 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.954330 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.956183 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.958765 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Dec 11 10:10:50 crc kubenswrapper[4746]: I1211 10:10:50.991096 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-7lhsc"]
Dec 11 10:10:50 crc kubenswrapper[4746]: E1211 10:10:50.992108 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-wm4wb ring-data-devices scripts swiftconf], unattached volumes=[], failed to process volumes=[combined-ca-bundle dispersionconf etc-swift kube-api-access-wm4wb ring-data-devices scripts swiftconf]: context canceled" pod="openstack/swift-ring-rebalance-7lhsc" podUID="22ca9c19-c1f1-4fb4-b261-f49c8b354ff2"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.008631 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-wrnch"]
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.010372 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.021332 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-7lhsc"]
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.028340 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wrnch"]
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.037976 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.084116 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-ovsdbserver-sb\") pod \"afbb11e6-6530-4f6c-af30-b73180c8c725\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") "
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.084266 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-config\") pod \"afbb11e6-6530-4f6c-af30-b73180c8c725\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") "
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.084453 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvmqq\" (UniqueName: \"kubernetes.io/projected/afbb11e6-6530-4f6c-af30-b73180c8c725-kube-api-access-fvmqq\") pod \"afbb11e6-6530-4f6c-af30-b73180c8c725\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") "
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.084547 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-dns-svc\") pod \"afbb11e6-6530-4f6c-af30-b73180c8c725\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") "
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.084823 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-ring-data-devices\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.084875 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-combined-ca-bundle\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.084910 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jfqm\" (UniqueName: \"kubernetes.io/projected/94f9d09a-c638-4da1-a6e0-3337621da894-kube-api-access-2jfqm\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.084947 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-etc-swift\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.085021 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-dispersionconf\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.085086 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/94f9d09a-c638-4da1-a6e0-3337621da894-etc-swift\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.085113 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-combined-ca-bundle\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.085141 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-swiftconf\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.085264 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-swiftconf\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.085316 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm4wb\" (UniqueName: \"kubernetes.io/projected/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-kube-api-access-wm4wb\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.085378 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94f9d09a-c638-4da1-a6e0-3337621da894-scripts\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.085409 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-scripts\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.085459 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.085495 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-dispersionconf\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.085525 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/94f9d09a-c638-4da1-a6e0-3337621da894-ring-data-devices\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: E1211 10:10:51.085915 4746 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 11 10:10:51 crc kubenswrapper[4746]: E1211 10:10:51.085942 4746 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 11 10:10:51 crc kubenswrapper[4746]: E1211 10:10:51.086002 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift podName:3df27f8b-76bd-441d-9c3a-2b8bd1f250c7 nodeName:}" failed. No retries permitted until 2025-12-11 10:10:52.085982834 +0000 UTC m=+1024.945846147 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift") pod "swift-storage-0" (UID: "3df27f8b-76bd-441d-9c3a-2b8bd1f250c7") : configmap "swift-ring-files" not found
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.090705 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afbb11e6-6530-4f6c-af30-b73180c8c725-kube-api-access-fvmqq" (OuterVolumeSpecName: "kube-api-access-fvmqq") pod "afbb11e6-6530-4f6c-af30-b73180c8c725" (UID: "afbb11e6-6530-4f6c-af30-b73180c8c725"). InnerVolumeSpecName "kube-api-access-fvmqq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.114081 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-config" (OuterVolumeSpecName: "config") pod "afbb11e6-6530-4f6c-af30-b73180c8c725" (UID: "afbb11e6-6530-4f6c-af30-b73180c8c725"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:10:51 crc kubenswrapper[4746]: E1211 10:10:51.131672 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-ovsdbserver-sb podName:afbb11e6-6530-4f6c-af30-b73180c8c725 nodeName:}" failed. No retries permitted until 2025-12-11 10:10:51.631635461 +0000 UTC m=+1024.491498774 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ovsdbserver-sb" (UniqueName: "kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-ovsdbserver-sb") pod "afbb11e6-6530-4f6c-af30-b73180c8c725" (UID: "afbb11e6-6530-4f6c-af30-b73180c8c725") : error deleting /var/lib/kubelet/pods/afbb11e6-6530-4f6c-af30-b73180c8c725/volume-subpaths: remove /var/lib/kubelet/pods/afbb11e6-6530-4f6c-af30-b73180c8c725/volume-subpaths: no such file or directory
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.132066 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "afbb11e6-6530-4f6c-af30-b73180c8c725" (UID: "afbb11e6-6530-4f6c-af30-b73180c8c725"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.187766 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm4wb\" (UniqueName: \"kubernetes.io/projected/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-kube-api-access-wm4wb\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.187838 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94f9d09a-c638-4da1-a6e0-3337621da894-scripts\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.187858 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-scripts\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.187902 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-dispersionconf\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.187920 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/94f9d09a-c638-4da1-a6e0-3337621da894-ring-data-devices\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc
kubenswrapper[4746]: I1211 10:10:51.187946 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-ring-data-devices\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc" Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.187971 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-combined-ca-bundle\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc" Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.187992 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jfqm\" (UniqueName: \"kubernetes.io/projected/94f9d09a-c638-4da1-a6e0-3337621da894-kube-api-access-2jfqm\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch" Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.188013 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-etc-swift\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc" Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.188073 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-dispersionconf\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch" Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.188094 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/94f9d09a-c638-4da1-a6e0-3337621da894-etc-swift\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.188109 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-combined-ca-bundle\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.188125 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-swiftconf\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.188150 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-swiftconf\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.188190 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-config\") on node \"crc\" DevicePath \"\""
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.188202 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvmqq\" (UniqueName: \"kubernetes.io/projected/afbb11e6-6530-4f6c-af30-b73180c8c725-kube-api-access-fvmqq\") on node \"crc\" DevicePath \"\""
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.188211 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.188654 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94f9d09a-c638-4da1-a6e0-3337621da894-scripts\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.189150 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/94f9d09a-c638-4da1-a6e0-3337621da894-ring-data-devices\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.189432 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-scripts\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.189457 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-ring-data-devices\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.189464 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/94f9d09a-c638-4da1-a6e0-3337621da894-etc-swift\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.189783 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-etc-swift\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.192908 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-dispersionconf\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.193438 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-swiftconf\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.194510 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-swiftconf\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.195700 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-combined-ca-bundle\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.200746 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-dispersionconf\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.208762 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-combined-ca-bundle\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.209351 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jfqm\" (UniqueName: \"kubernetes.io/projected/94f9d09a-c638-4da1-a6e0-3337621da894-kube-api-access-2jfqm\") pod \"swift-ring-rebalance-wrnch\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.213415 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm4wb\" (UniqueName: \"kubernetes.io/projected/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-kube-api-access-wm4wb\") pod \"swift-ring-rebalance-7lhsc\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") " pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.332605 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wrnch"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.701697 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-ovsdbserver-sb\") pod \"afbb11e6-6530-4f6c-af30-b73180c8c725\" (UID: \"afbb11e6-6530-4f6c-af30-b73180c8c725\") "
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.705746 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "afbb11e6-6530-4f6c-af30-b73180c8c725" (UID: "afbb11e6-6530-4f6c-af30-b73180c8c725"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.760644 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn" event={"ID":"afbb11e6-6530-4f6c-af30-b73180c8c725","Type":"ContainerDied","Data":"28092c35d1d0497180f0b76766b92747d1712fdbc955cb55439f1adaeafc1792"}
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.760736 4746 scope.go:117] "RemoveContainer" containerID="253974bfc19e510acee63e8847bfd63418521a462c7a9d48592171a97243ba52"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.760948 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-dcqtn"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.761312 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.803372 4746 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.803822 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/afbb11e6-6530-4f6c-af30-b73180c8c725-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.894741 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-dcqtn"]
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.906164 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-dispersionconf\") pod \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") "
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.906276 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-scripts\") pod \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") "
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.906312 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-combined-ca-bundle\") pod \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") "
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.906338 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-swiftconf\") pod \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") "
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.906389 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-etc-swift\") pod \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") "
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.906409 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm4wb\" (UniqueName: \"kubernetes.io/projected/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-kube-api-access-wm4wb\") pod \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") "
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.906461 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-ring-data-devices\") pod \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\" (UID: \"22ca9c19-c1f1-4fb4-b261-f49c8b354ff2\") "
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.906808 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "22ca9c19-c1f1-4fb4-b261-f49c8b354ff2" (UID: "22ca9c19-c1f1-4fb4-b261-f49c8b354ff2"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.907167 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-scripts" (OuterVolumeSpecName: "scripts") pod "22ca9c19-c1f1-4fb4-b261-f49c8b354ff2" (UID: "22ca9c19-c1f1-4fb4-b261-f49c8b354ff2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.907904 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "22ca9c19-c1f1-4fb4-b261-f49c8b354ff2" (UID: "22ca9c19-c1f1-4fb4-b261-f49c8b354ff2"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.908562 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.908584 4746 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-etc-swift\") on node \"crc\" DevicePath \"\""
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.908596 4746 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-ring-data-devices\") on node \"crc\" DevicePath \"\""
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.914354 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-dcqtn"]
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.938817 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "22ca9c19-c1f1-4fb4-b261-f49c8b354ff2" (UID: "22ca9c19-c1f1-4fb4-b261-f49c8b354ff2"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.938873 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22ca9c19-c1f1-4fb4-b261-f49c8b354ff2" (UID: "22ca9c19-c1f1-4fb4-b261-f49c8b354ff2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.938620 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "22ca9c19-c1f1-4fb4-b261-f49c8b354ff2" (UID: "22ca9c19-c1f1-4fb4-b261-f49c8b354ff2"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:10:51 crc kubenswrapper[4746]: I1211 10:10:51.938972 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-kube-api-access-wm4wb" (OuterVolumeSpecName: "kube-api-access-wm4wb") pod "22ca9c19-c1f1-4fb4-b261-f49c8b354ff2" (UID: "22ca9c19-c1f1-4fb4-b261-f49c8b354ff2"). InnerVolumeSpecName "kube-api-access-wm4wb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:10:52 crc kubenswrapper[4746]: I1211 10:10:52.010036 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm4wb\" (UniqueName: \"kubernetes.io/projected/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-kube-api-access-wm4wb\") on node \"crc\" DevicePath \"\""
Dec 11 10:10:52 crc kubenswrapper[4746]: I1211 10:10:52.010090 4746 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-dispersionconf\") on node \"crc\" DevicePath \"\""
Dec 11 10:10:52 crc kubenswrapper[4746]: I1211 10:10:52.010102 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 10:10:52 crc kubenswrapper[4746]: I1211 10:10:52.010112 4746 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2-swiftconf\") on node \"crc\" DevicePath \"\""
Dec 11 10:10:52 crc kubenswrapper[4746]: I1211 10:10:52.053144 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wrnch"]
Dec 11 10:10:52 crc kubenswrapper[4746]: I1211 10:10:52.111582 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:52 crc kubenswrapper[4746]: E1211 10:10:52.111803 4746 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 11 10:10:52 crc kubenswrapper[4746]: E1211 10:10:52.111822 4746 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0:
configmap "swift-ring-files" not found
Dec 11 10:10:52 crc kubenswrapper[4746]: E1211 10:10:52.111895 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift podName:3df27f8b-76bd-441d-9c3a-2b8bd1f250c7 nodeName:}" failed. No retries permitted until 2025-12-11 10:10:54.111853051 +0000 UTC m=+1026.971716364 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift") pod "swift-storage-0" (UID: "3df27f8b-76bd-441d-9c3a-2b8bd1f250c7") : configmap "swift-ring-files" not found
Dec 11 10:10:52 crc kubenswrapper[4746]: I1211 10:10:52.783494 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kw9sp" event={"ID":"95fab54f-52bc-4bff-8c9b-819cff12d9b9","Type":"ContainerStarted","Data":"6c2c73311130601d6a838467895e9498b867e36df24cb16c572c17763b7c026c"}
Dec 11 10:10:52 crc kubenswrapper[4746]: I1211 10:10:52.784038 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-kw9sp"
Dec 11 10:10:52 crc kubenswrapper[4746]: I1211 10:10:52.787087 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wrnch" event={"ID":"94f9d09a-c638-4da1-a6e0-3337621da894","Type":"ContainerStarted","Data":"671d1f224673bb0aae74a2644160c37badcb7229d1ee945cc4ee6f74a16253c1"}
Dec 11 10:10:52 crc kubenswrapper[4746]: I1211 10:10:52.788968 4746 generic.go:334] "Generic (PLEG): container finished" podID="3ee5c96f-8699-41e0-9318-4a5ad8af233d" containerID="5c72510cfc8afdcad0e1236396d77ea53bbd36f8a53211f72a7c63ef79ec23fc" exitCode=0
Dec 11 10:10:52 crc kubenswrapper[4746]: I1211 10:10:52.789086 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-7lhsc"
Dec 11 10:10:52 crc kubenswrapper[4746]: I1211 10:10:52.795935 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7" event={"ID":"3ee5c96f-8699-41e0-9318-4a5ad8af233d","Type":"ContainerDied","Data":"5c72510cfc8afdcad0e1236396d77ea53bbd36f8a53211f72a7c63ef79ec23fc"}
Dec 11 10:10:52 crc kubenswrapper[4746]: I1211 10:10:52.805776 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-kw9sp" podStartSLOduration=5.805761775 podStartE2EDuration="5.805761775s" podCreationTimestamp="2025-12-11 10:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:10:52.799127016 +0000 UTC m=+1025.658990329" watchObservedRunningTime="2025-12-11 10:10:52.805761775 +0000 UTC m=+1025.665625088"
Dec 11 10:10:52 crc kubenswrapper[4746]: I1211 10:10:52.911256 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-7lhsc"]
Dec 11 10:10:52 crc kubenswrapper[4746]: I1211 10:10:52.925522 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-7lhsc"]
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.224120 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m5fpx"]
Dec 11 10:10:53 crc kubenswrapper[4746]: E1211 10:10:53.224703 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afbb11e6-6530-4f6c-af30-b73180c8c725" containerName="init"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.224716 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="afbb11e6-6530-4f6c-af30-b73180c8c725" containerName="init"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.224854 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="afbb11e6-6530-4f6c-af30-b73180c8c725" containerName="init"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.226033 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m5fpx"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.233789 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69a05db9-bdf5-4141-a267-932c862a4ca3-utilities\") pod \"redhat-operators-m5fpx\" (UID: \"69a05db9-bdf5-4141-a267-932c862a4ca3\") " pod="openshift-marketplace/redhat-operators-m5fpx"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.233848 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76jgd\" (UniqueName: \"kubernetes.io/projected/69a05db9-bdf5-4141-a267-932c862a4ca3-kube-api-access-76jgd\") pod \"redhat-operators-m5fpx\" (UID: \"69a05db9-bdf5-4141-a267-932c862a4ca3\") " pod="openshift-marketplace/redhat-operators-m5fpx"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.233878 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69a05db9-bdf5-4141-a267-932c862a4ca3-catalog-content\") pod \"redhat-operators-m5fpx\" (UID: \"69a05db9-bdf5-4141-a267-932c862a4ca3\") " pod="openshift-marketplace/redhat-operators-m5fpx"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.244099 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m5fpx"]
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.335302 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69a05db9-bdf5-4141-a267-932c862a4ca3-utilities\") pod \"redhat-operators-m5fpx\" (UID: \"69a05db9-bdf5-4141-a267-932c862a4ca3\") " pod="openshift-marketplace/redhat-operators-m5fpx"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.335356 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76jgd\" (UniqueName: \"kubernetes.io/projected/69a05db9-bdf5-4141-a267-932c862a4ca3-kube-api-access-76jgd\") pod \"redhat-operators-m5fpx\" (UID: \"69a05db9-bdf5-4141-a267-932c862a4ca3\") " pod="openshift-marketplace/redhat-operators-m5fpx"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.335385 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69a05db9-bdf5-4141-a267-932c862a4ca3-catalog-content\") pod \"redhat-operators-m5fpx\" (UID: \"69a05db9-bdf5-4141-a267-932c862a4ca3\") " pod="openshift-marketplace/redhat-operators-m5fpx"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.336005 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69a05db9-bdf5-4141-a267-932c862a4ca3-utilities\") pod \"redhat-operators-m5fpx\" (UID: \"69a05db9-bdf5-4141-a267-932c862a4ca3\") " pod="openshift-marketplace/redhat-operators-m5fpx"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.336081 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69a05db9-bdf5-4141-a267-932c862a4ca3-catalog-content\") pod \"redhat-operators-m5fpx\" (UID: \"69a05db9-bdf5-4141-a267-932c862a4ca3\") " pod="openshift-marketplace/redhat-operators-m5fpx"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.366388 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76jgd\" (UniqueName: \"kubernetes.io/projected/69a05db9-bdf5-4141-a267-932c862a4ca3-kube-api-access-76jgd\") pod \"redhat-operators-m5fpx\" (UID: \"69a05db9-bdf5-4141-a267-932c862a4ca3\") " pod="openshift-marketplace/redhat-operators-m5fpx"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.549388 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m5fpx"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.641969 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22ca9c19-c1f1-4fb4-b261-f49c8b354ff2" path="/var/lib/kubelet/pods/22ca9c19-c1f1-4fb4-b261-f49c8b354ff2/volumes"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.642803 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afbb11e6-6530-4f6c-af30-b73180c8c725" path="/var/lib/kubelet/pods/afbb11e6-6530-4f6c-af30-b73180c8c725/volumes"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.702366 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4lz68"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.702674 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4lz68"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.768441 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4lz68"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.827959 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"01f4f65b-37fc-4500-a9ba-ba3a717c37bb","Type":"ContainerStarted","Data":"560b85321667c5d7c6edf0459da0428539a6ce1dd830381b50bfbeb0b7d79fd3"}
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.831177 4746 generic.go:334] "Generic (PLEG): container finished" podID="f35f21ce-59cb-4ee0-850c-9aba4010c890" containerID="9313ab0b190c80e177d86fab3632ae2696d87ce29a2101861d234a040a5ac99d" exitCode=0
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.831234 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f35f21ce-59cb-4ee0-850c-9aba4010c890","Type":"ContainerDied","Data":"9313ab0b190c80e177d86fab3632ae2696d87ce29a2101861d234a040a5ac99d"}
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.922219 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7" event={"ID":"3ee5c96f-8699-41e0-9318-4a5ad8af233d","Type":"ContainerStarted","Data":"906f65b562849c513e3290bbefd48980bea8ec54dc852a875a265e6f1ceeade8"}
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.922341 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7"
Dec 11 10:10:53 crc kubenswrapper[4746]: I1211 10:10:53.975427 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7" podStartSLOduration=4.975406017 podStartE2EDuration="4.975406017s" podCreationTimestamp="2025-12-11 10:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:10:53.954186606 +0000 UTC m=+1026.814049929" watchObservedRunningTime="2025-12-11 10:10:53.975406017 +0000 UTC m=+1026.835269330"
Dec 11 10:10:54 crc kubenswrapper[4746]: I1211 10:10:54.009565 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4lz68"
Dec 11 10:10:54 crc kubenswrapper[4746]: I1211 10:10:54.120562 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0"
Dec 11 10:10:54 crc kubenswrapper[4746]: E1211 10:10:54.120706 4746 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Dec 11 10:10:54 crc kubenswrapper[4746]: E1211 10:10:54.120721 4746 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Dec 11 10:10:54 crc kubenswrapper[4746]: E1211 10:10:54.120769 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift podName:3df27f8b-76bd-441d-9c3a-2b8bd1f250c7 nodeName:}" failed. No retries permitted until 2025-12-11 10:10:58.120750904 +0000 UTC m=+1030.980614277 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift") pod "swift-storage-0" (UID: "3df27f8b-76bd-441d-9c3a-2b8bd1f250c7") : configmap "swift-ring-files" not found
Dec 11 10:10:54 crc kubenswrapper[4746]: I1211 10:10:54.172061 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m5fpx"]
Dec 11 10:10:54 crc kubenswrapper[4746]: W1211 10:10:54.183917 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69a05db9_bdf5_4141_a267_932c862a4ca3.slice/crio-78adc86475dd2568dcb0ac3aea9842586835d66530c2419d4dda1f8ef141f672 WatchSource:0}: Error finding container 78adc86475dd2568dcb0ac3aea9842586835d66530c2419d4dda1f8ef141f672: Status 404 returned error can't find the container with id 78adc86475dd2568dcb0ac3aea9842586835d66530c2419d4dda1f8ef141f672
Dec 11 10:10:54 crc kubenswrapper[4746]: I1211 10:10:54.931952 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"f35f21ce-59cb-4ee0-850c-9aba4010c890","Type":"ContainerStarted","Data":"aeada020065aa8acfccb5962024b7b41784467fdaa975695f3220050fa564e0a"}
Dec 11 10:10:54 crc kubenswrapper[4746]: I1211 10:10:54.935317 4746 generic.go:334] "Generic (PLEG): container finished" podID="69a05db9-bdf5-4141-a267-932c862a4ca3"
containerID="28d0fda7ecbc5524ab6a026090ffee41539c0b27e9451690fb9a79d5b1c1b12e" exitCode=0 Dec 11 10:10:54 crc kubenswrapper[4746]: I1211 10:10:54.935406 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5fpx" event={"ID":"69a05db9-bdf5-4141-a267-932c862a4ca3","Type":"ContainerDied","Data":"28d0fda7ecbc5524ab6a026090ffee41539c0b27e9451690fb9a79d5b1c1b12e"} Dec 11 10:10:54 crc kubenswrapper[4746]: I1211 10:10:54.935440 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5fpx" event={"ID":"69a05db9-bdf5-4141-a267-932c862a4ca3","Type":"ContainerStarted","Data":"78adc86475dd2568dcb0ac3aea9842586835d66530c2419d4dda1f8ef141f672"} Dec 11 10:10:54 crc kubenswrapper[4746]: I1211 10:10:54.939850 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"01f4f65b-37fc-4500-a9ba-ba3a717c37bb","Type":"ContainerStarted","Data":"74c7c9ddcd32086d7f352741b4dbf5cfb096d6dbfabb60ec7900610f68cd16b8"} Dec 11 10:10:54 crc kubenswrapper[4746]: I1211 10:10:54.940170 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 11 10:10:54 crc kubenswrapper[4746]: I1211 10:10:54.959021 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371974.895771 podStartE2EDuration="1m1.959003908s" podCreationTimestamp="2025-12-11 10:09:53 +0000 UTC" firstStartedPulling="2025-12-11 10:09:55.966148621 +0000 UTC m=+968.826011934" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:10:54.954704073 +0000 UTC m=+1027.814567386" watchObservedRunningTime="2025-12-11 10:10:54.959003908 +0000 UTC m=+1027.818867221" Dec 11 10:10:55 crc kubenswrapper[4746]: I1211 10:10:55.005743 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.08255883 
podStartE2EDuration="8.005723464s" podCreationTimestamp="2025-12-11 10:10:47 +0000 UTC" firstStartedPulling="2025-12-11 10:10:48.494622343 +0000 UTC m=+1021.354485656" lastFinishedPulling="2025-12-11 10:10:53.417786977 +0000 UTC m=+1026.277650290" observedRunningTime="2025-12-11 10:10:55.002461516 +0000 UTC m=+1027.862324829" watchObservedRunningTime="2025-12-11 10:10:55.005723464 +0000 UTC m=+1027.865586777" Dec 11 10:10:55 crc kubenswrapper[4746]: I1211 10:10:55.364565 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 11 10:10:55 crc kubenswrapper[4746]: I1211 10:10:55.364811 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 11 10:10:55 crc kubenswrapper[4746]: I1211 10:10:55.416182 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:10:55 crc kubenswrapper[4746]: I1211 10:10:55.467729 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:10:56 crc kubenswrapper[4746]: I1211 10:10:56.003099 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lz68"] Dec 11 10:10:56 crc kubenswrapper[4746]: I1211 10:10:56.863700 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 11 10:10:56 crc kubenswrapper[4746]: I1211 10:10:56.863739 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 11 10:10:56 crc kubenswrapper[4746]: I1211 10:10:56.943228 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 11 10:10:56 crc kubenswrapper[4746]: I1211 10:10:56.954419 4746 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-4lz68" podUID="877b3463-e822-4f57-b9f2-afbcbda4a044" containerName="registry-server" containerID="cri-o://b009ffb61d92327f18265cec8e84e17573f7719c4af6c0cd09881f481b774605" gracePeriod=2 Dec 11 10:10:57 crc kubenswrapper[4746]: I1211 10:10:57.050428 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 11 10:10:57 crc kubenswrapper[4746]: I1211 10:10:57.668297 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:10:57 crc kubenswrapper[4746]: I1211 10:10:57.985789 4746 generic.go:334] "Generic (PLEG): container finished" podID="877b3463-e822-4f57-b9f2-afbcbda4a044" containerID="b009ffb61d92327f18265cec8e84e17573f7719c4af6c0cd09881f481b774605" exitCode=0 Dec 11 10:10:57 crc kubenswrapper[4746]: I1211 10:10:57.986620 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lz68" event={"ID":"877b3463-e822-4f57-b9f2-afbcbda4a044","Type":"ContainerDied","Data":"b009ffb61d92327f18265cec8e84e17573f7719c4af6c0cd09881f481b774605"} Dec 11 10:10:58 crc kubenswrapper[4746]: I1211 10:10:58.168193 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lz68" Dec 11 10:10:58 crc kubenswrapper[4746]: I1211 10:10:58.207964 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0" Dec 11 10:10:58 crc kubenswrapper[4746]: E1211 10:10:58.208191 4746 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 10:10:58 crc kubenswrapper[4746]: E1211 10:10:58.208210 4746 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 10:10:58 crc kubenswrapper[4746]: E1211 10:10:58.208293 4746 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift podName:3df27f8b-76bd-441d-9c3a-2b8bd1f250c7 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:06.208272484 +0000 UTC m=+1039.068135797 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift") pod "swift-storage-0" (UID: "3df27f8b-76bd-441d-9c3a-2b8bd1f250c7") : configmap "swift-ring-files" not found Dec 11 10:10:58 crc kubenswrapper[4746]: I1211 10:10:58.309662 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/877b3463-e822-4f57-b9f2-afbcbda4a044-utilities\") pod \"877b3463-e822-4f57-b9f2-afbcbda4a044\" (UID: \"877b3463-e822-4f57-b9f2-afbcbda4a044\") " Dec 11 10:10:58 crc kubenswrapper[4746]: I1211 10:10:58.310036 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/877b3463-e822-4f57-b9f2-afbcbda4a044-catalog-content\") pod \"877b3463-e822-4f57-b9f2-afbcbda4a044\" (UID: \"877b3463-e822-4f57-b9f2-afbcbda4a044\") " Dec 11 10:10:58 crc kubenswrapper[4746]: I1211 10:10:58.310186 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2gbc\" (UniqueName: \"kubernetes.io/projected/877b3463-e822-4f57-b9f2-afbcbda4a044-kube-api-access-f2gbc\") pod \"877b3463-e822-4f57-b9f2-afbcbda4a044\" (UID: \"877b3463-e822-4f57-b9f2-afbcbda4a044\") " Dec 11 10:10:58 crc kubenswrapper[4746]: I1211 10:10:58.310639 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/877b3463-e822-4f57-b9f2-afbcbda4a044-utilities" (OuterVolumeSpecName: "utilities") pod "877b3463-e822-4f57-b9f2-afbcbda4a044" (UID: "877b3463-e822-4f57-b9f2-afbcbda4a044"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:10:58 crc kubenswrapper[4746]: I1211 10:10:58.310746 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/877b3463-e822-4f57-b9f2-afbcbda4a044-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:58 crc kubenswrapper[4746]: I1211 10:10:58.316228 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/877b3463-e822-4f57-b9f2-afbcbda4a044-kube-api-access-f2gbc" (OuterVolumeSpecName: "kube-api-access-f2gbc") pod "877b3463-e822-4f57-b9f2-afbcbda4a044" (UID: "877b3463-e822-4f57-b9f2-afbcbda4a044"). InnerVolumeSpecName "kube-api-access-f2gbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:10:58 crc kubenswrapper[4746]: I1211 10:10:58.328321 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/877b3463-e822-4f57-b9f2-afbcbda4a044-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "877b3463-e822-4f57-b9f2-afbcbda4a044" (UID: "877b3463-e822-4f57-b9f2-afbcbda4a044"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:10:58 crc kubenswrapper[4746]: I1211 10:10:58.405363 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rp4z9"] Dec 11 10:10:58 crc kubenswrapper[4746]: I1211 10:10:58.405878 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rp4z9" podUID="87f5deb8-7a92-4b45-b818-8fa95241290b" containerName="registry-server" containerID="cri-o://ac9b0641044d66d996b2dc76b82830e4e9c2a27ca04c3715acde83b69caecad4" gracePeriod=2 Dec 11 10:10:58 crc kubenswrapper[4746]: I1211 10:10:58.412497 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/877b3463-e822-4f57-b9f2-afbcbda4a044-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:58 crc kubenswrapper[4746]: I1211 10:10:58.412524 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2gbc\" (UniqueName: \"kubernetes.io/projected/877b3463-e822-4f57-b9f2-afbcbda4a044-kube-api-access-f2gbc\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.004741 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.005231 4746 generic.go:334] "Generic (PLEG): container finished" podID="87f5deb8-7a92-4b45-b818-8fa95241290b" containerID="ac9b0641044d66d996b2dc76b82830e4e9c2a27ca04c3715acde83b69caecad4" exitCode=0 Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.005302 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4z9" event={"ID":"87f5deb8-7a92-4b45-b818-8fa95241290b","Type":"ContainerDied","Data":"ac9b0641044d66d996b2dc76b82830e4e9c2a27ca04c3715acde83b69caecad4"} Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.005332 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rp4z9" event={"ID":"87f5deb8-7a92-4b45-b818-8fa95241290b","Type":"ContainerDied","Data":"75313f4361a64369b8e1535a21a05273ae42fa387caa1934f4f2c3ed990897fe"} Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.005354 4746 scope.go:117] "RemoveContainer" containerID="ac9b0641044d66d996b2dc76b82830e4e9c2a27ca04c3715acde83b69caecad4" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.007951 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wrnch" event={"ID":"94f9d09a-c638-4da1-a6e0-3337621da894","Type":"ContainerStarted","Data":"6d1cfd8f16470f3a55ad3855405cca6ade3c37e6315f16c99f162cadd53ae916"} Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.011523 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5fpx" event={"ID":"69a05db9-bdf5-4141-a267-932c862a4ca3","Type":"ContainerStarted","Data":"4d1bf0db4079d125615a6c63d241b7b6dc5a7c8fc3606fab921d11819f52adb3"} Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.016227 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lz68" 
event={"ID":"877b3463-e822-4f57-b9f2-afbcbda4a044","Type":"ContainerDied","Data":"9c4109aba2691d21ea6194c02d48c0349de02e8966f5fce918d5dbb36153c184"} Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.016294 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lz68" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.055361 4746 scope.go:117] "RemoveContainer" containerID="990fb98065975c11b34a9170c886cb39f261272a9a69d0ab296652113925c146" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.078416 4746 scope.go:117] "RemoveContainer" containerID="8e3d351235b69ab26116cb06c1d3b221d667109f1ab79fbeb22b86d8879240dc" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.079572 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-wrnch" podStartSLOduration=3.170558211 podStartE2EDuration="9.079558396s" podCreationTimestamp="2025-12-11 10:10:50 +0000 UTC" firstStartedPulling="2025-12-11 10:10:52.066289666 +0000 UTC m=+1024.926152989" lastFinishedPulling="2025-12-11 10:10:57.975289871 +0000 UTC m=+1030.835153174" observedRunningTime="2025-12-11 10:10:59.076700269 +0000 UTC m=+1031.936563592" watchObservedRunningTime="2025-12-11 10:10:59.079558396 +0000 UTC m=+1031.939421709" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.099242 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lz68"] Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.107684 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lz68"] Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.115499 4746 scope.go:117] "RemoveContainer" containerID="ac9b0641044d66d996b2dc76b82830e4e9c2a27ca04c3715acde83b69caecad4" Dec 11 10:10:59 crc kubenswrapper[4746]: E1211 10:10:59.115977 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"ac9b0641044d66d996b2dc76b82830e4e9c2a27ca04c3715acde83b69caecad4\": container with ID starting with ac9b0641044d66d996b2dc76b82830e4e9c2a27ca04c3715acde83b69caecad4 not found: ID does not exist" containerID="ac9b0641044d66d996b2dc76b82830e4e9c2a27ca04c3715acde83b69caecad4" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.116009 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9b0641044d66d996b2dc76b82830e4e9c2a27ca04c3715acde83b69caecad4"} err="failed to get container status \"ac9b0641044d66d996b2dc76b82830e4e9c2a27ca04c3715acde83b69caecad4\": rpc error: code = NotFound desc = could not find container \"ac9b0641044d66d996b2dc76b82830e4e9c2a27ca04c3715acde83b69caecad4\": container with ID starting with ac9b0641044d66d996b2dc76b82830e4e9c2a27ca04c3715acde83b69caecad4 not found: ID does not exist" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.116028 4746 scope.go:117] "RemoveContainer" containerID="990fb98065975c11b34a9170c886cb39f261272a9a69d0ab296652113925c146" Dec 11 10:10:59 crc kubenswrapper[4746]: E1211 10:10:59.116335 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"990fb98065975c11b34a9170c886cb39f261272a9a69d0ab296652113925c146\": container with ID starting with 990fb98065975c11b34a9170c886cb39f261272a9a69d0ab296652113925c146 not found: ID does not exist" containerID="990fb98065975c11b34a9170c886cb39f261272a9a69d0ab296652113925c146" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.116356 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"990fb98065975c11b34a9170c886cb39f261272a9a69d0ab296652113925c146"} err="failed to get container status \"990fb98065975c11b34a9170c886cb39f261272a9a69d0ab296652113925c146\": rpc error: code = NotFound desc = could not find container 
\"990fb98065975c11b34a9170c886cb39f261272a9a69d0ab296652113925c146\": container with ID starting with 990fb98065975c11b34a9170c886cb39f261272a9a69d0ab296652113925c146 not found: ID does not exist" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.116370 4746 scope.go:117] "RemoveContainer" containerID="8e3d351235b69ab26116cb06c1d3b221d667109f1ab79fbeb22b86d8879240dc" Dec 11 10:10:59 crc kubenswrapper[4746]: E1211 10:10:59.116575 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e3d351235b69ab26116cb06c1d3b221d667109f1ab79fbeb22b86d8879240dc\": container with ID starting with 8e3d351235b69ab26116cb06c1d3b221d667109f1ab79fbeb22b86d8879240dc not found: ID does not exist" containerID="8e3d351235b69ab26116cb06c1d3b221d667109f1ab79fbeb22b86d8879240dc" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.116613 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e3d351235b69ab26116cb06c1d3b221d667109f1ab79fbeb22b86d8879240dc"} err="failed to get container status \"8e3d351235b69ab26116cb06c1d3b221d667109f1ab79fbeb22b86d8879240dc\": rpc error: code = NotFound desc = could not find container \"8e3d351235b69ab26116cb06c1d3b221d667109f1ab79fbeb22b86d8879240dc\": container with ID starting with 8e3d351235b69ab26116cb06c1d3b221d667109f1ab79fbeb22b86d8879240dc not found: ID does not exist" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.116629 4746 scope.go:117] "RemoveContainer" containerID="b009ffb61d92327f18265cec8e84e17573f7719c4af6c0cd09881f481b774605" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.132932 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f5deb8-7a92-4b45-b818-8fa95241290b-utilities\") pod \"87f5deb8-7a92-4b45-b818-8fa95241290b\" (UID: \"87f5deb8-7a92-4b45-b818-8fa95241290b\") " Dec 11 10:10:59 crc kubenswrapper[4746]: 
I1211 10:10:59.133107 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktgbp\" (UniqueName: \"kubernetes.io/projected/87f5deb8-7a92-4b45-b818-8fa95241290b-kube-api-access-ktgbp\") pod \"87f5deb8-7a92-4b45-b818-8fa95241290b\" (UID: \"87f5deb8-7a92-4b45-b818-8fa95241290b\") " Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.133957 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f5deb8-7a92-4b45-b818-8fa95241290b-utilities" (OuterVolumeSpecName: "utilities") pod "87f5deb8-7a92-4b45-b818-8fa95241290b" (UID: "87f5deb8-7a92-4b45-b818-8fa95241290b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.134588 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f5deb8-7a92-4b45-b818-8fa95241290b-catalog-content\") pod \"87f5deb8-7a92-4b45-b818-8fa95241290b\" (UID: \"87f5deb8-7a92-4b45-b818-8fa95241290b\") " Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.135142 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f5deb8-7a92-4b45-b818-8fa95241290b-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.143014 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f5deb8-7a92-4b45-b818-8fa95241290b-kube-api-access-ktgbp" (OuterVolumeSpecName: "kube-api-access-ktgbp") pod "87f5deb8-7a92-4b45-b818-8fa95241290b" (UID: "87f5deb8-7a92-4b45-b818-8fa95241290b"). InnerVolumeSpecName "kube-api-access-ktgbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.152349 4746 scope.go:117] "RemoveContainer" containerID="e7570b3a9b3edf605397d6f2ff4875be391223f38b441ccd4763d4206f348eca" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.173368 4746 scope.go:117] "RemoveContainer" containerID="66cc335ad627230fce1d91ad64bdabd2b34bea6896f815744763a64f9a8829e0" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.210237 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f5deb8-7a92-4b45-b818-8fa95241290b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87f5deb8-7a92-4b45-b818-8fa95241290b" (UID: "87f5deb8-7a92-4b45-b818-8fa95241290b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.236285 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f5deb8-7a92-4b45-b818-8fa95241290b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.236334 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktgbp\" (UniqueName: \"kubernetes.io/projected/87f5deb8-7a92-4b45-b818-8fa95241290b-kube-api-access-ktgbp\") on node \"crc\" DevicePath \"\"" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.639971 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="877b3463-e822-4f57-b9f2-afbcbda4a044" path="/var/lib/kubelet/pods/877b3463-e822-4f57-b9f2-afbcbda4a044/volumes" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.730487 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7" Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.809013 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-8554648995-kw9sp"] Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.810261 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-kw9sp" podUID="95fab54f-52bc-4bff-8c9b-819cff12d9b9" containerName="dnsmasq-dns" containerID="cri-o://6c2c73311130601d6a838467895e9498b867e36df24cb16c572c17763b7c026c" gracePeriod=10 Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.877403 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:10:59 crc kubenswrapper[4746]: I1211 10:10:59.877476 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:11:00 crc kubenswrapper[4746]: I1211 10:11:00.028463 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rp4z9" Dec 11 10:11:00 crc kubenswrapper[4746]: I1211 10:11:00.051268 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rp4z9"] Dec 11 10:11:00 crc kubenswrapper[4746]: I1211 10:11:00.059616 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rp4z9"] Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.038316 4746 generic.go:334] "Generic (PLEG): container finished" podID="95fab54f-52bc-4bff-8c9b-819cff12d9b9" containerID="6c2c73311130601d6a838467895e9498b867e36df24cb16c572c17763b7c026c" exitCode=0 Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.038378 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kw9sp" event={"ID":"95fab54f-52bc-4bff-8c9b-819cff12d9b9","Type":"ContainerDied","Data":"6c2c73311130601d6a838467895e9498b867e36df24cb16c572c17763b7c026c"} Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.041387 4746 generic.go:334] "Generic (PLEG): container finished" podID="69a05db9-bdf5-4141-a267-932c862a4ca3" containerID="4d1bf0db4079d125615a6c63d241b7b6dc5a7c8fc3606fab921d11819f52adb3" exitCode=0 Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.041428 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5fpx" event={"ID":"69a05db9-bdf5-4141-a267-932c862a4ca3","Type":"ContainerDied","Data":"4d1bf0db4079d125615a6c63d241b7b6dc5a7c8fc3606fab921d11819f52adb3"} Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.327838 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.372848 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-config\") pod \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.372997 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7b7m\" (UniqueName: \"kubernetes.io/projected/95fab54f-52bc-4bff-8c9b-819cff12d9b9-kube-api-access-c7b7m\") pod \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.373038 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-ovsdbserver-nb\") pod \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.373114 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-ovsdbserver-sb\") pod \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.373131 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-dns-svc\") pod \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\" (UID: \"95fab54f-52bc-4bff-8c9b-819cff12d9b9\") " Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.388884 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/95fab54f-52bc-4bff-8c9b-819cff12d9b9-kube-api-access-c7b7m" (OuterVolumeSpecName: "kube-api-access-c7b7m") pod "95fab54f-52bc-4bff-8c9b-819cff12d9b9" (UID: "95fab54f-52bc-4bff-8c9b-819cff12d9b9"). InnerVolumeSpecName "kube-api-access-c7b7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.428728 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "95fab54f-52bc-4bff-8c9b-819cff12d9b9" (UID: "95fab54f-52bc-4bff-8c9b-819cff12d9b9"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.435676 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-config" (OuterVolumeSpecName: "config") pod "95fab54f-52bc-4bff-8c9b-819cff12d9b9" (UID: "95fab54f-52bc-4bff-8c9b-819cff12d9b9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.438454 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "95fab54f-52bc-4bff-8c9b-819cff12d9b9" (UID: "95fab54f-52bc-4bff-8c9b-819cff12d9b9"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.450536 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "95fab54f-52bc-4bff-8c9b-819cff12d9b9" (UID: "95fab54f-52bc-4bff-8c9b-819cff12d9b9"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.475360 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.475401 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7b7m\" (UniqueName: \"kubernetes.io/projected/95fab54f-52bc-4bff-8c9b-819cff12d9b9-kube-api-access-c7b7m\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.475414 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.475425 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.475437 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/95fab54f-52bc-4bff-8c9b-819cff12d9b9-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.642098 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f5deb8-7a92-4b45-b818-8fa95241290b" path="/var/lib/kubelet/pods/87f5deb8-7a92-4b45-b818-8fa95241290b/volumes" Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.677381 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 11 10:11:01 crc kubenswrapper[4746]: I1211 10:11:01.751337 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 11 10:11:02 crc 
kubenswrapper[4746]: I1211 10:11:02.051661 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-kw9sp" event={"ID":"95fab54f-52bc-4bff-8c9b-819cff12d9b9","Type":"ContainerDied","Data":"da50a64eb3d099a0d58c7b3179dc8dfaf0b602712b78a50b31f172ba389a8a1a"} Dec 11 10:11:02 crc kubenswrapper[4746]: I1211 10:11:02.051684 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-kw9sp" Dec 11 10:11:02 crc kubenswrapper[4746]: I1211 10:11:02.052058 4746 scope.go:117] "RemoveContainer" containerID="6c2c73311130601d6a838467895e9498b867e36df24cb16c572c17763b7c026c" Dec 11 10:11:02 crc kubenswrapper[4746]: I1211 10:11:02.076947 4746 scope.go:117] "RemoveContainer" containerID="2f29bb532018d5a9fa46938673f14f60329c8aeb633a621f71351127b342d7fc" Dec 11 10:11:02 crc kubenswrapper[4746]: I1211 10:11:02.079296 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-kw9sp"] Dec 11 10:11:02 crc kubenswrapper[4746]: I1211 10:11:02.085799 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-kw9sp"] Dec 11 10:11:03 crc kubenswrapper[4746]: I1211 10:11:03.063344 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5fpx" event={"ID":"69a05db9-bdf5-4141-a267-932c862a4ca3","Type":"ContainerStarted","Data":"79dbcab2ada8426a0323cfa98bb6cd7e4ee476e9b50e20cd21ae45f15d5d0a55"} Dec 11 10:11:03 crc kubenswrapper[4746]: I1211 10:11:03.087778 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m5fpx" podStartSLOduration=2.915740807 podStartE2EDuration="10.087761124s" podCreationTimestamp="2025-12-11 10:10:53 +0000 UTC" firstStartedPulling="2025-12-11 10:10:54.936429631 +0000 UTC m=+1027.796292944" lastFinishedPulling="2025-12-11 10:11:02.108449948 +0000 UTC m=+1034.968313261" observedRunningTime="2025-12-11 
10:11:03.081804814 +0000 UTC m=+1035.941668127" watchObservedRunningTime="2025-12-11 10:11:03.087761124 +0000 UTC m=+1035.947624457" Dec 11 10:11:03 crc kubenswrapper[4746]: I1211 10:11:03.550561 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m5fpx" Dec 11 10:11:03 crc kubenswrapper[4746]: I1211 10:11:03.550627 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m5fpx" Dec 11 10:11:03 crc kubenswrapper[4746]: I1211 10:11:03.640322 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95fab54f-52bc-4bff-8c9b-819cff12d9b9" path="/var/lib/kubelet/pods/95fab54f-52bc-4bff-8c9b-819cff12d9b9/volumes" Dec 11 10:11:04 crc kubenswrapper[4746]: I1211 10:11:04.595424 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m5fpx" podUID="69a05db9-bdf5-4141-a267-932c862a4ca3" containerName="registry-server" probeResult="failure" output=< Dec 11 10:11:04 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s Dec 11 10:11:04 crc kubenswrapper[4746]: > Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.258975 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0" Dec 11 10:11:06 crc kubenswrapper[4746]: E1211 10:11:06.259162 4746 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 11 10:11:06 crc kubenswrapper[4746]: E1211 10:11:06.259349 4746 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 11 10:11:06 crc kubenswrapper[4746]: E1211 10:11:06.259400 4746 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift podName:3df27f8b-76bd-441d-9c3a-2b8bd1f250c7 nodeName:}" failed. No retries permitted until 2025-12-11 10:11:22.259385564 +0000 UTC m=+1055.119248877 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift") pod "swift-storage-0" (UID: "3df27f8b-76bd-441d-9c3a-2b8bd1f250c7") : configmap "swift-ring-files" not found Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.623470 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-d52e-account-create-update-d8ngl"] Dec 11 10:11:06 crc kubenswrapper[4746]: E1211 10:11:06.624019 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877b3463-e822-4f57-b9f2-afbcbda4a044" containerName="extract-utilities" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.624063 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="877b3463-e822-4f57-b9f2-afbcbda4a044" containerName="extract-utilities" Dec 11 10:11:06 crc kubenswrapper[4746]: E1211 10:11:06.624083 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f5deb8-7a92-4b45-b818-8fa95241290b" containerName="extract-utilities" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.624096 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f5deb8-7a92-4b45-b818-8fa95241290b" containerName="extract-utilities" Dec 11 10:11:06 crc kubenswrapper[4746]: E1211 10:11:06.624126 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fab54f-52bc-4bff-8c9b-819cff12d9b9" containerName="dnsmasq-dns" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.624136 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fab54f-52bc-4bff-8c9b-819cff12d9b9" containerName="dnsmasq-dns" Dec 11 10:11:06 crc kubenswrapper[4746]: E1211 10:11:06.624164 4746 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877b3463-e822-4f57-b9f2-afbcbda4a044" containerName="registry-server" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.624174 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="877b3463-e822-4f57-b9f2-afbcbda4a044" containerName="registry-server" Dec 11 10:11:06 crc kubenswrapper[4746]: E1211 10:11:06.624190 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f5deb8-7a92-4b45-b818-8fa95241290b" containerName="registry-server" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.624201 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f5deb8-7a92-4b45-b818-8fa95241290b" containerName="registry-server" Dec 11 10:11:06 crc kubenswrapper[4746]: E1211 10:11:06.624218 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f5deb8-7a92-4b45-b818-8fa95241290b" containerName="extract-content" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.624227 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f5deb8-7a92-4b45-b818-8fa95241290b" containerName="extract-content" Dec 11 10:11:06 crc kubenswrapper[4746]: E1211 10:11:06.624238 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="877b3463-e822-4f57-b9f2-afbcbda4a044" containerName="extract-content" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.624247 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="877b3463-e822-4f57-b9f2-afbcbda4a044" containerName="extract-content" Dec 11 10:11:06 crc kubenswrapper[4746]: E1211 10:11:06.624261 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95fab54f-52bc-4bff-8c9b-819cff12d9b9" containerName="init" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.624270 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="95fab54f-52bc-4bff-8c9b-819cff12d9b9" containerName="init" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.624474 4746 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="95fab54f-52bc-4bff-8c9b-819cff12d9b9" containerName="dnsmasq-dns" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.624496 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f5deb8-7a92-4b45-b818-8fa95241290b" containerName="registry-server" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.624512 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="877b3463-e822-4f57-b9f2-afbcbda4a044" containerName="registry-server" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.625360 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d52e-account-create-update-d8ngl" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.627690 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.632266 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-bkqql"] Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.633847 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-bkqql" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.640292 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d52e-account-create-update-d8ngl"] Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.649140 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bkqql"] Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.667228 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzxqb\" (UniqueName: \"kubernetes.io/projected/1d59e901-be21-486f-8b33-ea5b9b2a60a7-kube-api-access-lzxqb\") pod \"keystone-d52e-account-create-update-d8ngl\" (UID: \"1d59e901-be21-486f-8b33-ea5b9b2a60a7\") " pod="openstack/keystone-d52e-account-create-update-d8ngl" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.667296 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d59e901-be21-486f-8b33-ea5b9b2a60a7-operator-scripts\") pod \"keystone-d52e-account-create-update-d8ngl\" (UID: \"1d59e901-be21-486f-8b33-ea5b9b2a60a7\") " pod="openstack/keystone-d52e-account-create-update-d8ngl" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.667447 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnnkq\" (UniqueName: \"kubernetes.io/projected/6d780f1f-9761-4b5a-a706-18d23110b336-kube-api-access-wnnkq\") pod \"keystone-db-create-bkqql\" (UID: \"6d780f1f-9761-4b5a-a706-18d23110b336\") " pod="openstack/keystone-db-create-bkqql" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.667515 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d780f1f-9761-4b5a-a706-18d23110b336-operator-scripts\") pod 
\"keystone-db-create-bkqql\" (UID: \"6d780f1f-9761-4b5a-a706-18d23110b336\") " pod="openstack/keystone-db-create-bkqql" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.769115 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzxqb\" (UniqueName: \"kubernetes.io/projected/1d59e901-be21-486f-8b33-ea5b9b2a60a7-kube-api-access-lzxqb\") pod \"keystone-d52e-account-create-update-d8ngl\" (UID: \"1d59e901-be21-486f-8b33-ea5b9b2a60a7\") " pod="openstack/keystone-d52e-account-create-update-d8ngl" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.769200 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d59e901-be21-486f-8b33-ea5b9b2a60a7-operator-scripts\") pod \"keystone-d52e-account-create-update-d8ngl\" (UID: \"1d59e901-be21-486f-8b33-ea5b9b2a60a7\") " pod="openstack/keystone-d52e-account-create-update-d8ngl" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.769362 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnnkq\" (UniqueName: \"kubernetes.io/projected/6d780f1f-9761-4b5a-a706-18d23110b336-kube-api-access-wnnkq\") pod \"keystone-db-create-bkqql\" (UID: \"6d780f1f-9761-4b5a-a706-18d23110b336\") " pod="openstack/keystone-db-create-bkqql" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.769400 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d780f1f-9761-4b5a-a706-18d23110b336-operator-scripts\") pod \"keystone-db-create-bkqql\" (UID: \"6d780f1f-9761-4b5a-a706-18d23110b336\") " pod="openstack/keystone-db-create-bkqql" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.770195 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1d59e901-be21-486f-8b33-ea5b9b2a60a7-operator-scripts\") pod \"keystone-d52e-account-create-update-d8ngl\" (UID: \"1d59e901-be21-486f-8b33-ea5b9b2a60a7\") " pod="openstack/keystone-d52e-account-create-update-d8ngl" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.770298 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d780f1f-9761-4b5a-a706-18d23110b336-operator-scripts\") pod \"keystone-db-create-bkqql\" (UID: \"6d780f1f-9761-4b5a-a706-18d23110b336\") " pod="openstack/keystone-db-create-bkqql" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.795899 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnnkq\" (UniqueName: \"kubernetes.io/projected/6d780f1f-9761-4b5a-a706-18d23110b336-kube-api-access-wnnkq\") pod \"keystone-db-create-bkqql\" (UID: \"6d780f1f-9761-4b5a-a706-18d23110b336\") " pod="openstack/keystone-db-create-bkqql" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.806881 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzxqb\" (UniqueName: \"kubernetes.io/projected/1d59e901-be21-486f-8b33-ea5b9b2a60a7-kube-api-access-lzxqb\") pod \"keystone-d52e-account-create-update-d8ngl\" (UID: \"1d59e901-be21-486f-8b33-ea5b9b2a60a7\") " pod="openstack/keystone-d52e-account-create-update-d8ngl" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.809974 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-jz62q"] Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.812064 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-jz62q" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.831844 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-jz62q"] Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.871878 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6b3935c-dc1b-4bfd-95d3-de24c6139ae9-operator-scripts\") pod \"placement-db-create-jz62q\" (UID: \"d6b3935c-dc1b-4bfd-95d3-de24c6139ae9\") " pod="openstack/placement-db-create-jz62q" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.871945 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnnts\" (UniqueName: \"kubernetes.io/projected/d6b3935c-dc1b-4bfd-95d3-de24c6139ae9-kube-api-access-xnnts\") pod \"placement-db-create-jz62q\" (UID: \"d6b3935c-dc1b-4bfd-95d3-de24c6139ae9\") " pod="openstack/placement-db-create-jz62q" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.897592 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-1146-account-create-update-8p79d"] Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.898583 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1146-account-create-update-8p79d" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.903627 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.910570 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1146-account-create-update-8p79d"] Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.955875 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-d52e-account-create-update-d8ngl" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.966667 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bkqql" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.974151 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9fe9f17-da32-4064-b8fe-c2ae0c5107b0-operator-scripts\") pod \"placement-1146-account-create-update-8p79d\" (UID: \"a9fe9f17-da32-4064-b8fe-c2ae0c5107b0\") " pod="openstack/placement-1146-account-create-update-8p79d" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.974637 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm564\" (UniqueName: \"kubernetes.io/projected/a9fe9f17-da32-4064-b8fe-c2ae0c5107b0-kube-api-access-bm564\") pod \"placement-1146-account-create-update-8p79d\" (UID: \"a9fe9f17-da32-4064-b8fe-c2ae0c5107b0\") " pod="openstack/placement-1146-account-create-update-8p79d" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.974745 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6b3935c-dc1b-4bfd-95d3-de24c6139ae9-operator-scripts\") pod \"placement-db-create-jz62q\" (UID: \"d6b3935c-dc1b-4bfd-95d3-de24c6139ae9\") " pod="openstack/placement-db-create-jz62q" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.974798 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnnts\" (UniqueName: \"kubernetes.io/projected/d6b3935c-dc1b-4bfd-95d3-de24c6139ae9-kube-api-access-xnnts\") pod \"placement-db-create-jz62q\" (UID: \"d6b3935c-dc1b-4bfd-95d3-de24c6139ae9\") " pod="openstack/placement-db-create-jz62q" Dec 11 10:11:06 crc kubenswrapper[4746]: I1211 10:11:06.976400 
4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6b3935c-dc1b-4bfd-95d3-de24c6139ae9-operator-scripts\") pod \"placement-db-create-jz62q\" (UID: \"d6b3935c-dc1b-4bfd-95d3-de24c6139ae9\") " pod="openstack/placement-db-create-jz62q" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.010274 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnnts\" (UniqueName: \"kubernetes.io/projected/d6b3935c-dc1b-4bfd-95d3-de24c6139ae9-kube-api-access-xnnts\") pod \"placement-db-create-jz62q\" (UID: \"d6b3935c-dc1b-4bfd-95d3-de24c6139ae9\") " pod="openstack/placement-db-create-jz62q" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.076429 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-jx9sp"] Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.077246 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm564\" (UniqueName: \"kubernetes.io/projected/a9fe9f17-da32-4064-b8fe-c2ae0c5107b0-kube-api-access-bm564\") pod \"placement-1146-account-create-update-8p79d\" (UID: \"a9fe9f17-da32-4064-b8fe-c2ae0c5107b0\") " pod="openstack/placement-1146-account-create-update-8p79d" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.077370 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9fe9f17-da32-4064-b8fe-c2ae0c5107b0-operator-scripts\") pod \"placement-1146-account-create-update-8p79d\" (UID: \"a9fe9f17-da32-4064-b8fe-c2ae0c5107b0\") " pod="openstack/placement-1146-account-create-update-8p79d" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.078990 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9fe9f17-da32-4064-b8fe-c2ae0c5107b0-operator-scripts\") pod 
\"placement-1146-account-create-update-8p79d\" (UID: \"a9fe9f17-da32-4064-b8fe-c2ae0c5107b0\") " pod="openstack/placement-1146-account-create-update-8p79d" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.091614 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jx9sp" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.106779 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm564\" (UniqueName: \"kubernetes.io/projected/a9fe9f17-da32-4064-b8fe-c2ae0c5107b0-kube-api-access-bm564\") pod \"placement-1146-account-create-update-8p79d\" (UID: \"a9fe9f17-da32-4064-b8fe-c2ae0c5107b0\") " pod="openstack/placement-1146-account-create-update-8p79d" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.107667 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jx9sp"] Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.184066 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b20489d-e037-4fdf-83e6-aeb450aff0f8-operator-scripts\") pod \"glance-db-create-jx9sp\" (UID: \"0b20489d-e037-4fdf-83e6-aeb450aff0f8\") " pod="openstack/glance-db-create-jx9sp" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.184134 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kt74\" (UniqueName: \"kubernetes.io/projected/0b20489d-e037-4fdf-83e6-aeb450aff0f8-kube-api-access-9kt74\") pod \"glance-db-create-jx9sp\" (UID: \"0b20489d-e037-4fdf-83e6-aeb450aff0f8\") " pod="openstack/glance-db-create-jx9sp" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.199024 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-jz62q" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.214100 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-3fbf-account-create-update-njwlh"] Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.215198 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3fbf-account-create-update-njwlh" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.221126 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.225765 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1146-account-create-update-8p79d" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.240715 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3fbf-account-create-update-njwlh"] Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.285858 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc7gp\" (UniqueName: \"kubernetes.io/projected/2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e-kube-api-access-qc7gp\") pod \"glance-3fbf-account-create-update-njwlh\" (UID: \"2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e\") " pod="openstack/glance-3fbf-account-create-update-njwlh" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.285977 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b20489d-e037-4fdf-83e6-aeb450aff0f8-operator-scripts\") pod \"glance-db-create-jx9sp\" (UID: \"0b20489d-e037-4fdf-83e6-aeb450aff0f8\") " pod="openstack/glance-db-create-jx9sp" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.286036 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kt74\" (UniqueName: 
\"kubernetes.io/projected/0b20489d-e037-4fdf-83e6-aeb450aff0f8-kube-api-access-9kt74\") pod \"glance-db-create-jx9sp\" (UID: \"0b20489d-e037-4fdf-83e6-aeb450aff0f8\") " pod="openstack/glance-db-create-jx9sp" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.286147 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e-operator-scripts\") pod \"glance-3fbf-account-create-update-njwlh\" (UID: \"2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e\") " pod="openstack/glance-3fbf-account-create-update-njwlh" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.286701 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b20489d-e037-4fdf-83e6-aeb450aff0f8-operator-scripts\") pod \"glance-db-create-jx9sp\" (UID: \"0b20489d-e037-4fdf-83e6-aeb450aff0f8\") " pod="openstack/glance-db-create-jx9sp" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.304688 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kt74\" (UniqueName: \"kubernetes.io/projected/0b20489d-e037-4fdf-83e6-aeb450aff0f8-kube-api-access-9kt74\") pod \"glance-db-create-jx9sp\" (UID: \"0b20489d-e037-4fdf-83e6-aeb450aff0f8\") " pod="openstack/glance-db-create-jx9sp" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.391014 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qc7gp\" (UniqueName: \"kubernetes.io/projected/2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e-kube-api-access-qc7gp\") pod \"glance-3fbf-account-create-update-njwlh\" (UID: \"2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e\") " pod="openstack/glance-3fbf-account-create-update-njwlh" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.392061 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e-operator-scripts\") pod \"glance-3fbf-account-create-update-njwlh\" (UID: \"2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e\") " pod="openstack/glance-3fbf-account-create-update-njwlh" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.391238 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e-operator-scripts\") pod \"glance-3fbf-account-create-update-njwlh\" (UID: \"2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e\") " pod="openstack/glance-3fbf-account-create-update-njwlh" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.408263 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc7gp\" (UniqueName: \"kubernetes.io/projected/2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e-kube-api-access-qc7gp\") pod \"glance-3fbf-account-create-update-njwlh\" (UID: \"2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e\") " pod="openstack/glance-3fbf-account-create-update-njwlh" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.471444 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jx9sp" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.546469 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3fbf-account-create-update-njwlh" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.565487 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-bkqql"] Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.730690 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-d52e-account-create-update-d8ngl"] Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.871811 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-1146-account-create-update-8p79d"] Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.947333 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 11 10:11:07 crc kubenswrapper[4746]: I1211 10:11:07.982631 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-jz62q"] Dec 11 10:11:08 crc kubenswrapper[4746]: I1211 10:11:08.064819 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-3fbf-account-create-update-njwlh"] Dec 11 10:11:08 crc kubenswrapper[4746]: I1211 10:11:08.132822 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jx9sp"] Dec 11 10:11:08 crc kubenswrapper[4746]: I1211 10:11:08.155104 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d52e-account-create-update-d8ngl" event={"ID":"1d59e901-be21-486f-8b33-ea5b9b2a60a7","Type":"ContainerStarted","Data":"138a3a7c88e7c1d37890273bd1e00644574472d4b5360791af4e370c7dbdfdae"} Dec 11 10:11:08 crc kubenswrapper[4746]: I1211 10:11:08.160104 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3fbf-account-create-update-njwlh" event={"ID":"2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e","Type":"ContainerStarted","Data":"f31523a355da7e79f8e28b50c61411be47cc59a44b40c6520b8b76c5857ef56e"} Dec 11 10:11:08 crc kubenswrapper[4746]: I1211 10:11:08.162712 4746 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/placement-1146-account-create-update-8p79d" event={"ID":"a9fe9f17-da32-4064-b8fe-c2ae0c5107b0","Type":"ContainerStarted","Data":"23bfb4fc83bc482df87591fa1be08a60a01bce03cd4ccbd58a3f9efa696fc4cd"} Dec 11 10:11:08 crc kubenswrapper[4746]: I1211 10:11:08.164406 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bkqql" event={"ID":"6d780f1f-9761-4b5a-a706-18d23110b336","Type":"ContainerStarted","Data":"d3499e475a5c574f468ec2ef7739e33c7202308e5b4718b4c21ff1fdaa27219a"} Dec 11 10:11:08 crc kubenswrapper[4746]: I1211 10:11:08.164438 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bkqql" event={"ID":"6d780f1f-9761-4b5a-a706-18d23110b336","Type":"ContainerStarted","Data":"d373ba73f347f59dc96560da96a9109b76f30badd7562d66daabc5868dfcc7bd"} Dec 11 10:11:08 crc kubenswrapper[4746]: I1211 10:11:08.166663 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-jz62q" event={"ID":"d6b3935c-dc1b-4bfd-95d3-de24c6139ae9","Type":"ContainerStarted","Data":"e0ddfe480413f278822a573f50d6344fe2902d877852f8fa074db10b0a056fc1"} Dec 11 10:11:08 crc kubenswrapper[4746]: I1211 10:11:08.169262 4746 generic.go:334] "Generic (PLEG): container finished" podID="94f9d09a-c638-4da1-a6e0-3337621da894" containerID="6d1cfd8f16470f3a55ad3855405cca6ade3c37e6315f16c99f162cadd53ae916" exitCode=0 Dec 11 10:11:08 crc kubenswrapper[4746]: I1211 10:11:08.169351 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wrnch" event={"ID":"94f9d09a-c638-4da1-a6e0-3337621da894","Type":"ContainerDied","Data":"6d1cfd8f16470f3a55ad3855405cca6ade3c37e6315f16c99f162cadd53ae916"} Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.177654 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9fe9f17-da32-4064-b8fe-c2ae0c5107b0" containerID="0bca09d3c902bbb1fddaebd16a860b09bb6e83ebe0b97d19ce291254d7bb56e9" exitCode=0 
Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.177781 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1146-account-create-update-8p79d" event={"ID":"a9fe9f17-da32-4064-b8fe-c2ae0c5107b0","Type":"ContainerDied","Data":"0bca09d3c902bbb1fddaebd16a860b09bb6e83ebe0b97d19ce291254d7bb56e9"} Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.180104 4746 generic.go:334] "Generic (PLEG): container finished" podID="6d780f1f-9761-4b5a-a706-18d23110b336" containerID="d3499e475a5c574f468ec2ef7739e33c7202308e5b4718b4c21ff1fdaa27219a" exitCode=0 Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.180229 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bkqql" event={"ID":"6d780f1f-9761-4b5a-a706-18d23110b336","Type":"ContainerDied","Data":"d3499e475a5c574f468ec2ef7739e33c7202308e5b4718b4c21ff1fdaa27219a"} Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.183775 4746 generic.go:334] "Generic (PLEG): container finished" podID="d6b3935c-dc1b-4bfd-95d3-de24c6139ae9" containerID="81957127c383d6bb9a4f83f61040800ef86421c667caf2adae97b2747eb08bcd" exitCode=0 Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.183845 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-jz62q" event={"ID":"d6b3935c-dc1b-4bfd-95d3-de24c6139ae9","Type":"ContainerDied","Data":"81957127c383d6bb9a4f83f61040800ef86421c667caf2adae97b2747eb08bcd"} Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.185609 4746 generic.go:334] "Generic (PLEG): container finished" podID="1d59e901-be21-486f-8b33-ea5b9b2a60a7" containerID="2d49886d6c26cd542a6af8155ca894d9926d4b6fd88b774b34f185235c440376" exitCode=0 Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.185667 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d52e-account-create-update-d8ngl" 
event={"ID":"1d59e901-be21-486f-8b33-ea5b9b2a60a7","Type":"ContainerDied","Data":"2d49886d6c26cd542a6af8155ca894d9926d4b6fd88b774b34f185235c440376"} Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.187281 4746 generic.go:334] "Generic (PLEG): container finished" podID="0b20489d-e037-4fdf-83e6-aeb450aff0f8" containerID="0c3014c14de4958b5bea89e84f51b7bdd53e8717a96c2022ed489a9a12237335" exitCode=0 Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.187320 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jx9sp" event={"ID":"0b20489d-e037-4fdf-83e6-aeb450aff0f8","Type":"ContainerDied","Data":"0c3014c14de4958b5bea89e84f51b7bdd53e8717a96c2022ed489a9a12237335"} Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.187336 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jx9sp" event={"ID":"0b20489d-e037-4fdf-83e6-aeb450aff0f8","Type":"ContainerStarted","Data":"1fd8cbf85b82deb762e61006efed764684f06e36a9531a02c5e0a2b62309164c"} Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.199458 4746 generic.go:334] "Generic (PLEG): container finished" podID="2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e" containerID="2d2100e23f49ffdb0eb2fafb20079db05aa49df66d631b7c3403f1a8127f7f59" exitCode=0 Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.199499 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3fbf-account-create-update-njwlh" event={"ID":"2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e","Type":"ContainerDied","Data":"2d2100e23f49ffdb0eb2fafb20079db05aa49df66d631b7c3403f1a8127f7f59"} Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.481122 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-bkqql" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.555267 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnnkq\" (UniqueName: \"kubernetes.io/projected/6d780f1f-9761-4b5a-a706-18d23110b336-kube-api-access-wnnkq\") pod \"6d780f1f-9761-4b5a-a706-18d23110b336\" (UID: \"6d780f1f-9761-4b5a-a706-18d23110b336\") " Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.555371 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d780f1f-9761-4b5a-a706-18d23110b336-operator-scripts\") pod \"6d780f1f-9761-4b5a-a706-18d23110b336\" (UID: \"6d780f1f-9761-4b5a-a706-18d23110b336\") " Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.556695 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d780f1f-9761-4b5a-a706-18d23110b336-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d780f1f-9761-4b5a-a706-18d23110b336" (UID: "6d780f1f-9761-4b5a-a706-18d23110b336"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.563536 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d780f1f-9761-4b5a-a706-18d23110b336-kube-api-access-wnnkq" (OuterVolumeSpecName: "kube-api-access-wnnkq") pod "6d780f1f-9761-4b5a-a706-18d23110b336" (UID: "6d780f1f-9761-4b5a-a706-18d23110b336"). InnerVolumeSpecName "kube-api-access-wnnkq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.640382 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wrnch" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.664512 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnnkq\" (UniqueName: \"kubernetes.io/projected/6d780f1f-9761-4b5a-a706-18d23110b336-kube-api-access-wnnkq\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.664543 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d780f1f-9761-4b5a-a706-18d23110b336-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.766301 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/94f9d09a-c638-4da1-a6e0-3337621da894-etc-swift\") pod \"94f9d09a-c638-4da1-a6e0-3337621da894\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.766745 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-combined-ca-bundle\") pod \"94f9d09a-c638-4da1-a6e0-3337621da894\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.766989 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-dispersionconf\") pod \"94f9d09a-c638-4da1-a6e0-3337621da894\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.767571 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94f9d09a-c638-4da1-a6e0-3337621da894-scripts\") pod \"94f9d09a-c638-4da1-a6e0-3337621da894\" (UID: 
\"94f9d09a-c638-4da1-a6e0-3337621da894\") " Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.767748 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/94f9d09a-c638-4da1-a6e0-3337621da894-ring-data-devices\") pod \"94f9d09a-c638-4da1-a6e0-3337621da894\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.767177 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94f9d09a-c638-4da1-a6e0-3337621da894-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "94f9d09a-c638-4da1-a6e0-3337621da894" (UID: "94f9d09a-c638-4da1-a6e0-3337621da894"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.768309 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jfqm\" (UniqueName: \"kubernetes.io/projected/94f9d09a-c638-4da1-a6e0-3337621da894-kube-api-access-2jfqm\") pod \"94f9d09a-c638-4da1-a6e0-3337621da894\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.768532 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-swiftconf\") pod \"94f9d09a-c638-4da1-a6e0-3337621da894\" (UID: \"94f9d09a-c638-4da1-a6e0-3337621da894\") " Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.768621 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94f9d09a-c638-4da1-a6e0-3337621da894-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "94f9d09a-c638-4da1-a6e0-3337621da894" (UID: "94f9d09a-c638-4da1-a6e0-3337621da894"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.769450 4746 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/94f9d09a-c638-4da1-a6e0-3337621da894-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.769672 4746 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/94f9d09a-c638-4da1-a6e0-3337621da894-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.773498 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94f9d09a-c638-4da1-a6e0-3337621da894-kube-api-access-2jfqm" (OuterVolumeSpecName: "kube-api-access-2jfqm") pod "94f9d09a-c638-4da1-a6e0-3337621da894" (UID: "94f9d09a-c638-4da1-a6e0-3337621da894"). InnerVolumeSpecName "kube-api-access-2jfqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.788527 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "94f9d09a-c638-4da1-a6e0-3337621da894" (UID: "94f9d09a-c638-4da1-a6e0-3337621da894"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.793764 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94f9d09a-c638-4da1-a6e0-3337621da894-scripts" (OuterVolumeSpecName: "scripts") pod "94f9d09a-c638-4da1-a6e0-3337621da894" (UID: "94f9d09a-c638-4da1-a6e0-3337621da894"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.799905 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "94f9d09a-c638-4da1-a6e0-3337621da894" (UID: "94f9d09a-c638-4da1-a6e0-3337621da894"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.815655 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94f9d09a-c638-4da1-a6e0-3337621da894" (UID: "94f9d09a-c638-4da1-a6e0-3337621da894"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.871967 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jfqm\" (UniqueName: \"kubernetes.io/projected/94f9d09a-c638-4da1-a6e0-3337621da894-kube-api-access-2jfqm\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.872004 4746 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.872015 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.872023 4746 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/94f9d09a-c638-4da1-a6e0-3337621da894-dispersionconf\") on node \"crc\" DevicePath \"\"" 
Dec 11 10:11:09 crc kubenswrapper[4746]: I1211 10:11:09.872034 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94f9d09a-c638-4da1-a6e0-3337621da894-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.216672 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-bkqql" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.216913 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-bkqql" event={"ID":"6d780f1f-9761-4b5a-a706-18d23110b336","Type":"ContainerDied","Data":"d373ba73f347f59dc96560da96a9109b76f30badd7562d66daabc5868dfcc7bd"} Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.216967 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d373ba73f347f59dc96560da96a9109b76f30badd7562d66daabc5868dfcc7bd" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.221761 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wrnch" event={"ID":"94f9d09a-c638-4da1-a6e0-3337621da894","Type":"ContainerDied","Data":"671d1f224673bb0aae74a2644160c37badcb7229d1ee945cc4ee6f74a16253c1"} Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.221828 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="671d1f224673bb0aae74a2644160c37badcb7229d1ee945cc4ee6f74a16253c1" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.221933 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wrnch" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.586623 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-3fbf-account-create-update-njwlh" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.687926 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qc7gp\" (UniqueName: \"kubernetes.io/projected/2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e-kube-api-access-qc7gp\") pod \"2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e\" (UID: \"2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e\") " Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.688002 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e-operator-scripts\") pod \"2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e\" (UID: \"2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e\") " Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.690149 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e" (UID: "2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.697524 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e-kube-api-access-qc7gp" (OuterVolumeSpecName: "kube-api-access-qc7gp") pod "2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e" (UID: "2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e"). InnerVolumeSpecName "kube-api-access-qc7gp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.769992 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-jz62q" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.790417 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qc7gp\" (UniqueName: \"kubernetes.io/projected/2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e-kube-api-access-qc7gp\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.790450 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.791025 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-1146-account-create-update-8p79d" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.864611 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d52e-account-create-update-d8ngl" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.873320 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jx9sp" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.891647 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9fe9f17-da32-4064-b8fe-c2ae0c5107b0-operator-scripts\") pod \"a9fe9f17-da32-4064-b8fe-c2ae0c5107b0\" (UID: \"a9fe9f17-da32-4064-b8fe-c2ae0c5107b0\") " Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.891735 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6b3935c-dc1b-4bfd-95d3-de24c6139ae9-operator-scripts\") pod \"d6b3935c-dc1b-4bfd-95d3-de24c6139ae9\" (UID: \"d6b3935c-dc1b-4bfd-95d3-de24c6139ae9\") " Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.891801 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm564\" (UniqueName: \"kubernetes.io/projected/a9fe9f17-da32-4064-b8fe-c2ae0c5107b0-kube-api-access-bm564\") pod \"a9fe9f17-da32-4064-b8fe-c2ae0c5107b0\" (UID: \"a9fe9f17-da32-4064-b8fe-c2ae0c5107b0\") " Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.891885 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnnts\" (UniqueName: \"kubernetes.io/projected/d6b3935c-dc1b-4bfd-95d3-de24c6139ae9-kube-api-access-xnnts\") pod \"d6b3935c-dc1b-4bfd-95d3-de24c6139ae9\" (UID: \"d6b3935c-dc1b-4bfd-95d3-de24c6139ae9\") " Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.893656 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9fe9f17-da32-4064-b8fe-c2ae0c5107b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9fe9f17-da32-4064-b8fe-c2ae0c5107b0" (UID: "a9fe9f17-da32-4064-b8fe-c2ae0c5107b0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.894208 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6b3935c-dc1b-4bfd-95d3-de24c6139ae9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d6b3935c-dc1b-4bfd-95d3-de24c6139ae9" (UID: "d6b3935c-dc1b-4bfd-95d3-de24c6139ae9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.900343 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9fe9f17-da32-4064-b8fe-c2ae0c5107b0-kube-api-access-bm564" (OuterVolumeSpecName: "kube-api-access-bm564") pod "a9fe9f17-da32-4064-b8fe-c2ae0c5107b0" (UID: "a9fe9f17-da32-4064-b8fe-c2ae0c5107b0"). InnerVolumeSpecName "kube-api-access-bm564". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.900954 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b3935c-dc1b-4bfd-95d3-de24c6139ae9-kube-api-access-xnnts" (OuterVolumeSpecName: "kube-api-access-xnnts") pod "d6b3935c-dc1b-4bfd-95d3-de24c6139ae9" (UID: "d6b3935c-dc1b-4bfd-95d3-de24c6139ae9"). InnerVolumeSpecName "kube-api-access-xnnts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.993829 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d59e901-be21-486f-8b33-ea5b9b2a60a7-operator-scripts\") pod \"1d59e901-be21-486f-8b33-ea5b9b2a60a7\" (UID: \"1d59e901-be21-486f-8b33-ea5b9b2a60a7\") " Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.993893 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b20489d-e037-4fdf-83e6-aeb450aff0f8-operator-scripts\") pod \"0b20489d-e037-4fdf-83e6-aeb450aff0f8\" (UID: \"0b20489d-e037-4fdf-83e6-aeb450aff0f8\") " Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.993957 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9kt74\" (UniqueName: \"kubernetes.io/projected/0b20489d-e037-4fdf-83e6-aeb450aff0f8-kube-api-access-9kt74\") pod \"0b20489d-e037-4fdf-83e6-aeb450aff0f8\" (UID: \"0b20489d-e037-4fdf-83e6-aeb450aff0f8\") " Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.993989 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzxqb\" (UniqueName: \"kubernetes.io/projected/1d59e901-be21-486f-8b33-ea5b9b2a60a7-kube-api-access-lzxqb\") pod \"1d59e901-be21-486f-8b33-ea5b9b2a60a7\" (UID: \"1d59e901-be21-486f-8b33-ea5b9b2a60a7\") " Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.994435 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm564\" (UniqueName: \"kubernetes.io/projected/a9fe9f17-da32-4064-b8fe-c2ae0c5107b0-kube-api-access-bm564\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.994460 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnnts\" (UniqueName: 
\"kubernetes.io/projected/d6b3935c-dc1b-4bfd-95d3-de24c6139ae9-kube-api-access-xnnts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.994475 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9fe9f17-da32-4064-b8fe-c2ae0c5107b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.994489 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d6b3935c-dc1b-4bfd-95d3-de24c6139ae9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.994835 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b20489d-e037-4fdf-83e6-aeb450aff0f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0b20489d-e037-4fdf-83e6-aeb450aff0f8" (UID: "0b20489d-e037-4fdf-83e6-aeb450aff0f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:10 crc kubenswrapper[4746]: I1211 10:11:10.995411 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1d59e901-be21-486f-8b33-ea5b9b2a60a7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1d59e901-be21-486f-8b33-ea5b9b2a60a7" (UID: "1d59e901-be21-486f-8b33-ea5b9b2a60a7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.001585 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b20489d-e037-4fdf-83e6-aeb450aff0f8-kube-api-access-9kt74" (OuterVolumeSpecName: "kube-api-access-9kt74") pod "0b20489d-e037-4fdf-83e6-aeb450aff0f8" (UID: "0b20489d-e037-4fdf-83e6-aeb450aff0f8"). InnerVolumeSpecName "kube-api-access-9kt74". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.003195 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d59e901-be21-486f-8b33-ea5b9b2a60a7-kube-api-access-lzxqb" (OuterVolumeSpecName: "kube-api-access-lzxqb") pod "1d59e901-be21-486f-8b33-ea5b9b2a60a7" (UID: "1d59e901-be21-486f-8b33-ea5b9b2a60a7"). InnerVolumeSpecName "kube-api-access-lzxqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.096403 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1d59e901-be21-486f-8b33-ea5b9b2a60a7-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.096451 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b20489d-e037-4fdf-83e6-aeb450aff0f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.096466 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9kt74\" (UniqueName: \"kubernetes.io/projected/0b20489d-e037-4fdf-83e6-aeb450aff0f8-kube-api-access-9kt74\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.096480 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzxqb\" (UniqueName: \"kubernetes.io/projected/1d59e901-be21-486f-8b33-ea5b9b2a60a7-kube-api-access-lzxqb\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.235131 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-jz62q" Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.235119 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-jz62q" event={"ID":"d6b3935c-dc1b-4bfd-95d3-de24c6139ae9","Type":"ContainerDied","Data":"e0ddfe480413f278822a573f50d6344fe2902d877852f8fa074db10b0a056fc1"} Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.235680 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0ddfe480413f278822a573f50d6344fe2902d877852f8fa074db10b0a056fc1" Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.237361 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-d52e-account-create-update-d8ngl" event={"ID":"1d59e901-be21-486f-8b33-ea5b9b2a60a7","Type":"ContainerDied","Data":"138a3a7c88e7c1d37890273bd1e00644574472d4b5360791af4e370c7dbdfdae"} Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.237404 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="138a3a7c88e7c1d37890273bd1e00644574472d4b5360791af4e370c7dbdfdae" Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.237455 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-d52e-account-create-update-d8ngl" Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.243646 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jx9sp" event={"ID":"0b20489d-e037-4fdf-83e6-aeb450aff0f8","Type":"ContainerDied","Data":"1fd8cbf85b82deb762e61006efed764684f06e36a9531a02c5e0a2b62309164c"} Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.243688 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fd8cbf85b82deb762e61006efed764684f06e36a9531a02c5e0a2b62309164c" Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.243757 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jx9sp" Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.246479 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-3fbf-account-create-update-njwlh" Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.246600 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-3fbf-account-create-update-njwlh" event={"ID":"2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e","Type":"ContainerDied","Data":"f31523a355da7e79f8e28b50c61411be47cc59a44b40c6520b8b76c5857ef56e"} Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.246643 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f31523a355da7e79f8e28b50c61411be47cc59a44b40c6520b8b76c5857ef56e" Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.249098 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-1146-account-create-update-8p79d" event={"ID":"a9fe9f17-da32-4064-b8fe-c2ae0c5107b0","Type":"ContainerDied","Data":"23bfb4fc83bc482df87591fa1be08a60a01bce03cd4ccbd58a3f9efa696fc4cd"} Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.249147 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23bfb4fc83bc482df87591fa1be08a60a01bce03cd4ccbd58a3f9efa696fc4cd" Dec 11 10:11:11 crc kubenswrapper[4746]: I1211 10:11:11.249211 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-1146-account-create-update-8p79d" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.369134 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-fh8cg"] Dec 11 10:11:12 crc kubenswrapper[4746]: E1211 10:11:12.369821 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e" containerName="mariadb-account-create-update" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.369835 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e" containerName="mariadb-account-create-update" Dec 11 10:11:12 crc kubenswrapper[4746]: E1211 10:11:12.369858 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d780f1f-9761-4b5a-a706-18d23110b336" containerName="mariadb-database-create" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.369866 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d780f1f-9761-4b5a-a706-18d23110b336" containerName="mariadb-database-create" Dec 11 10:11:12 crc kubenswrapper[4746]: E1211 10:11:12.369884 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b20489d-e037-4fdf-83e6-aeb450aff0f8" containerName="mariadb-database-create" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.369894 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b20489d-e037-4fdf-83e6-aeb450aff0f8" containerName="mariadb-database-create" Dec 11 10:11:12 crc kubenswrapper[4746]: E1211 10:11:12.369911 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f9d09a-c638-4da1-a6e0-3337621da894" containerName="swift-ring-rebalance" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.369919 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f9d09a-c638-4da1-a6e0-3337621da894" containerName="swift-ring-rebalance" Dec 11 10:11:12 crc kubenswrapper[4746]: E1211 10:11:12.369933 4746 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="1d59e901-be21-486f-8b33-ea5b9b2a60a7" containerName="mariadb-account-create-update" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.369941 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d59e901-be21-486f-8b33-ea5b9b2a60a7" containerName="mariadb-account-create-update" Dec 11 10:11:12 crc kubenswrapper[4746]: E1211 10:11:12.369958 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9fe9f17-da32-4064-b8fe-c2ae0c5107b0" containerName="mariadb-account-create-update" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.369966 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9fe9f17-da32-4064-b8fe-c2ae0c5107b0" containerName="mariadb-account-create-update" Dec 11 10:11:12 crc kubenswrapper[4746]: E1211 10:11:12.369981 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b3935c-dc1b-4bfd-95d3-de24c6139ae9" containerName="mariadb-database-create" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.369989 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b3935c-dc1b-4bfd-95d3-de24c6139ae9" containerName="mariadb-database-create" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.370236 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b3935c-dc1b-4bfd-95d3-de24c6139ae9" containerName="mariadb-database-create" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.370279 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="94f9d09a-c638-4da1-a6e0-3337621da894" containerName="swift-ring-rebalance" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.370312 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d780f1f-9761-4b5a-a706-18d23110b336" containerName="mariadb-database-create" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.370323 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9fe9f17-da32-4064-b8fe-c2ae0c5107b0" 
containerName="mariadb-account-create-update" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.370349 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d59e901-be21-486f-8b33-ea5b9b2a60a7" containerName="mariadb-account-create-update" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.370383 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e" containerName="mariadb-account-create-update" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.370398 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b20489d-e037-4fdf-83e6-aeb450aff0f8" containerName="mariadb-database-create" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.371117 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fh8cg" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.377311 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.377365 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7h7bv" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.399148 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-fh8cg"] Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.421211 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-db-sync-config-data\") pod \"glance-db-sync-fh8cg\" (UID: \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\") " pod="openstack/glance-db-sync-fh8cg" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.421278 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-combined-ca-bundle\") pod \"glance-db-sync-fh8cg\" (UID: \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\") " pod="openstack/glance-db-sync-fh8cg" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.421412 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-config-data\") pod \"glance-db-sync-fh8cg\" (UID: \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\") " pod="openstack/glance-db-sync-fh8cg" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.421476 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf2st\" (UniqueName: \"kubernetes.io/projected/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-kube-api-access-nf2st\") pod \"glance-db-sync-fh8cg\" (UID: \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\") " pod="openstack/glance-db-sync-fh8cg" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.522630 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-db-sync-config-data\") pod \"glance-db-sync-fh8cg\" (UID: \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\") " pod="openstack/glance-db-sync-fh8cg" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.522719 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-combined-ca-bundle\") pod \"glance-db-sync-fh8cg\" (UID: \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\") " pod="openstack/glance-db-sync-fh8cg" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.523032 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-config-data\") pod 
\"glance-db-sync-fh8cg\" (UID: \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\") " pod="openstack/glance-db-sync-fh8cg" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.523710 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf2st\" (UniqueName: \"kubernetes.io/projected/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-kube-api-access-nf2st\") pod \"glance-db-sync-fh8cg\" (UID: \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\") " pod="openstack/glance-db-sync-fh8cg" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.528057 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-config-data\") pod \"glance-db-sync-fh8cg\" (UID: \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\") " pod="openstack/glance-db-sync-fh8cg" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.528784 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-combined-ca-bundle\") pod \"glance-db-sync-fh8cg\" (UID: \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\") " pod="openstack/glance-db-sync-fh8cg" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.540322 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-db-sync-config-data\") pod \"glance-db-sync-fh8cg\" (UID: \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\") " pod="openstack/glance-db-sync-fh8cg" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.540757 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf2st\" (UniqueName: \"kubernetes.io/projected/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-kube-api-access-nf2st\") pod \"glance-db-sync-fh8cg\" (UID: \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\") " pod="openstack/glance-db-sync-fh8cg" Dec 11 10:11:12 crc 
kubenswrapper[4746]: I1211 10:11:12.690254 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fh8cg" Dec 11 10:11:12 crc kubenswrapper[4746]: I1211 10:11:12.729612 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-242vs" podUID="31760b52-7caf-49dd-bf1e-2d2f88b000a2" containerName="ovn-controller" probeResult="failure" output=< Dec 11 10:11:12 crc kubenswrapper[4746]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 11 10:11:12 crc kubenswrapper[4746]: > Dec 11 10:11:13 crc kubenswrapper[4746]: I1211 10:11:13.249811 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-fh8cg"] Dec 11 10:11:13 crc kubenswrapper[4746]: I1211 10:11:13.297459 4746 generic.go:334] "Generic (PLEG): container finished" podID="6b37a306-a93c-4cb2-9a15-888df45f0ca7" containerID="c2000b97a92c70f413a797f74ad7af2d1ad478ddb51b4ad876910c803d0020d1" exitCode=0 Dec 11 10:11:13 crc kubenswrapper[4746]: I1211 10:11:13.297532 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6b37a306-a93c-4cb2-9a15-888df45f0ca7","Type":"ContainerDied","Data":"c2000b97a92c70f413a797f74ad7af2d1ad478ddb51b4ad876910c803d0020d1"} Dec 11 10:11:13 crc kubenswrapper[4746]: I1211 10:11:13.298934 4746 generic.go:334] "Generic (PLEG): container finished" podID="c896f2d4-ac49-431e-b8c5-eda758cfa7cd" containerID="52784d2bd3120e89e64f8e4d1ef55a5e083b21b2ce171165f5bf21bf0df40c74" exitCode=0 Dec 11 10:11:13 crc kubenswrapper[4746]: I1211 10:11:13.298986 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c896f2d4-ac49-431e-b8c5-eda758cfa7cd","Type":"ContainerDied","Data":"52784d2bd3120e89e64f8e4d1ef55a5e083b21b2ce171165f5bf21bf0df40c74"} Dec 11 10:11:13 crc kubenswrapper[4746]: I1211 10:11:13.301005 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-db-sync-fh8cg" event={"ID":"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983","Type":"ContainerStarted","Data":"857dfbe1b5815b24420fe24f82299c184b6fc3f6c7a37bf9f7c563bfd720de59"} Dec 11 10:11:13 crc kubenswrapper[4746]: I1211 10:11:13.598000 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m5fpx" Dec 11 10:11:13 crc kubenswrapper[4746]: I1211 10:11:13.651447 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m5fpx" Dec 11 10:11:13 crc kubenswrapper[4746]: I1211 10:11:13.831536 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m5fpx"] Dec 11 10:11:14 crc kubenswrapper[4746]: I1211 10:11:14.315593 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6b37a306-a93c-4cb2-9a15-888df45f0ca7","Type":"ContainerStarted","Data":"94468ffebda107fe33ff04876efea461aafc7a246c310c3e28e2f6fe4862c01f"} Dec 11 10:11:14 crc kubenswrapper[4746]: I1211 10:11:14.316011 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 11 10:11:14 crc kubenswrapper[4746]: I1211 10:11:14.319281 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c896f2d4-ac49-431e-b8c5-eda758cfa7cd","Type":"ContainerStarted","Data":"c916edefa54aed3bb5e9ed0be01019855420ec58cdee4f2e41262d107563ede0"} Dec 11 10:11:14 crc kubenswrapper[4746]: I1211 10:11:14.319633 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:11:14 crc kubenswrapper[4746]: I1211 10:11:14.346280 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.794447606 podStartE2EDuration="1m22.346258634s" podCreationTimestamp="2025-12-11 10:09:52 +0000 UTC" 
firstStartedPulling="2025-12-11 10:09:54.412087492 +0000 UTC m=+967.271950805" lastFinishedPulling="2025-12-11 10:10:38.96389851 +0000 UTC m=+1011.823761833" observedRunningTime="2025-12-11 10:11:14.342189064 +0000 UTC m=+1047.202052387" watchObservedRunningTime="2025-12-11 10:11:14.346258634 +0000 UTC m=+1047.206121947" Dec 11 10:11:14 crc kubenswrapper[4746]: I1211 10:11:14.378266 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.702337554 podStartE2EDuration="1m22.378245904s" podCreationTimestamp="2025-12-11 10:09:52 +0000 UTC" firstStartedPulling="2025-12-11 10:09:55.342399612 +0000 UTC m=+968.202262925" lastFinishedPulling="2025-12-11 10:10:39.018307952 +0000 UTC m=+1011.878171275" observedRunningTime="2025-12-11 10:11:14.373373503 +0000 UTC m=+1047.233236856" watchObservedRunningTime="2025-12-11 10:11:14.378245904 +0000 UTC m=+1047.238109217" Dec 11 10:11:15 crc kubenswrapper[4746]: I1211 10:11:15.333359 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m5fpx" podUID="69a05db9-bdf5-4141-a267-932c862a4ca3" containerName="registry-server" containerID="cri-o://79dbcab2ada8426a0323cfa98bb6cd7e4ee476e9b50e20cd21ae45f15d5d0a55" gracePeriod=2 Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.299687 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m5fpx" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.344856 4746 generic.go:334] "Generic (PLEG): container finished" podID="69a05db9-bdf5-4141-a267-932c862a4ca3" containerID="79dbcab2ada8426a0323cfa98bb6cd7e4ee476e9b50e20cd21ae45f15d5d0a55" exitCode=0 Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.344916 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5fpx" event={"ID":"69a05db9-bdf5-4141-a267-932c862a4ca3","Type":"ContainerDied","Data":"79dbcab2ada8426a0323cfa98bb6cd7e4ee476e9b50e20cd21ae45f15d5d0a55"} Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.344945 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m5fpx" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.345008 4746 scope.go:117] "RemoveContainer" containerID="79dbcab2ada8426a0323cfa98bb6cd7e4ee476e9b50e20cd21ae45f15d5d0a55" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.344995 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m5fpx" event={"ID":"69a05db9-bdf5-4141-a267-932c862a4ca3","Type":"ContainerDied","Data":"78adc86475dd2568dcb0ac3aea9842586835d66530c2419d4dda1f8ef141f672"} Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.371782 4746 scope.go:117] "RemoveContainer" containerID="4d1bf0db4079d125615a6c63d241b7b6dc5a7c8fc3606fab921d11819f52adb3" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.384402 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69a05db9-bdf5-4141-a267-932c862a4ca3-catalog-content\") pod \"69a05db9-bdf5-4141-a267-932c862a4ca3\" (UID: \"69a05db9-bdf5-4141-a267-932c862a4ca3\") " Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.384479 4746 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69a05db9-bdf5-4141-a267-932c862a4ca3-utilities\") pod \"69a05db9-bdf5-4141-a267-932c862a4ca3\" (UID: \"69a05db9-bdf5-4141-a267-932c862a4ca3\") " Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.384557 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76jgd\" (UniqueName: \"kubernetes.io/projected/69a05db9-bdf5-4141-a267-932c862a4ca3-kube-api-access-76jgd\") pod \"69a05db9-bdf5-4141-a267-932c862a4ca3\" (UID: \"69a05db9-bdf5-4141-a267-932c862a4ca3\") " Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.386099 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69a05db9-bdf5-4141-a267-932c862a4ca3-utilities" (OuterVolumeSpecName: "utilities") pod "69a05db9-bdf5-4141-a267-932c862a4ca3" (UID: "69a05db9-bdf5-4141-a267-932c862a4ca3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.391409 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69a05db9-bdf5-4141-a267-932c862a4ca3-kube-api-access-76jgd" (OuterVolumeSpecName: "kube-api-access-76jgd") pod "69a05db9-bdf5-4141-a267-932c862a4ca3" (UID: "69a05db9-bdf5-4141-a267-932c862a4ca3"). InnerVolumeSpecName "kube-api-access-76jgd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.418575 4746 scope.go:117] "RemoveContainer" containerID="28d0fda7ecbc5524ab6a026090ffee41539c0b27e9451690fb9a79d5b1c1b12e" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.461523 4746 scope.go:117] "RemoveContainer" containerID="79dbcab2ada8426a0323cfa98bb6cd7e4ee476e9b50e20cd21ae45f15d5d0a55" Dec 11 10:11:16 crc kubenswrapper[4746]: E1211 10:11:16.461987 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79dbcab2ada8426a0323cfa98bb6cd7e4ee476e9b50e20cd21ae45f15d5d0a55\": container with ID starting with 79dbcab2ada8426a0323cfa98bb6cd7e4ee476e9b50e20cd21ae45f15d5d0a55 not found: ID does not exist" containerID="79dbcab2ada8426a0323cfa98bb6cd7e4ee476e9b50e20cd21ae45f15d5d0a55" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.462034 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79dbcab2ada8426a0323cfa98bb6cd7e4ee476e9b50e20cd21ae45f15d5d0a55"} err="failed to get container status \"79dbcab2ada8426a0323cfa98bb6cd7e4ee476e9b50e20cd21ae45f15d5d0a55\": rpc error: code = NotFound desc = could not find container \"79dbcab2ada8426a0323cfa98bb6cd7e4ee476e9b50e20cd21ae45f15d5d0a55\": container with ID starting with 79dbcab2ada8426a0323cfa98bb6cd7e4ee476e9b50e20cd21ae45f15d5d0a55 not found: ID does not exist" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.462080 4746 scope.go:117] "RemoveContainer" containerID="4d1bf0db4079d125615a6c63d241b7b6dc5a7c8fc3606fab921d11819f52adb3" Dec 11 10:11:16 crc kubenswrapper[4746]: E1211 10:11:16.462374 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d1bf0db4079d125615a6c63d241b7b6dc5a7c8fc3606fab921d11819f52adb3\": container with ID starting with 
4d1bf0db4079d125615a6c63d241b7b6dc5a7c8fc3606fab921d11819f52adb3 not found: ID does not exist" containerID="4d1bf0db4079d125615a6c63d241b7b6dc5a7c8fc3606fab921d11819f52adb3" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.462404 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d1bf0db4079d125615a6c63d241b7b6dc5a7c8fc3606fab921d11819f52adb3"} err="failed to get container status \"4d1bf0db4079d125615a6c63d241b7b6dc5a7c8fc3606fab921d11819f52adb3\": rpc error: code = NotFound desc = could not find container \"4d1bf0db4079d125615a6c63d241b7b6dc5a7c8fc3606fab921d11819f52adb3\": container with ID starting with 4d1bf0db4079d125615a6c63d241b7b6dc5a7c8fc3606fab921d11819f52adb3 not found: ID does not exist" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.462438 4746 scope.go:117] "RemoveContainer" containerID="28d0fda7ecbc5524ab6a026090ffee41539c0b27e9451690fb9a79d5b1c1b12e" Dec 11 10:11:16 crc kubenswrapper[4746]: E1211 10:11:16.462692 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28d0fda7ecbc5524ab6a026090ffee41539c0b27e9451690fb9a79d5b1c1b12e\": container with ID starting with 28d0fda7ecbc5524ab6a026090ffee41539c0b27e9451690fb9a79d5b1c1b12e not found: ID does not exist" containerID="28d0fda7ecbc5524ab6a026090ffee41539c0b27e9451690fb9a79d5b1c1b12e" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.462718 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28d0fda7ecbc5524ab6a026090ffee41539c0b27e9451690fb9a79d5b1c1b12e"} err="failed to get container status \"28d0fda7ecbc5524ab6a026090ffee41539c0b27e9451690fb9a79d5b1c1b12e\": rpc error: code = NotFound desc = could not find container \"28d0fda7ecbc5524ab6a026090ffee41539c0b27e9451690fb9a79d5b1c1b12e\": container with ID starting with 28d0fda7ecbc5524ab6a026090ffee41539c0b27e9451690fb9a79d5b1c1b12e not found: ID does not 
exist" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.486387 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69a05db9-bdf5-4141-a267-932c862a4ca3-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.486432 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76jgd\" (UniqueName: \"kubernetes.io/projected/69a05db9-bdf5-4141-a267-932c862a4ca3-kube-api-access-76jgd\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.506592 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69a05db9-bdf5-4141-a267-932c862a4ca3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69a05db9-bdf5-4141-a267-932c862a4ca3" (UID: "69a05db9-bdf5-4141-a267-932c862a4ca3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.588542 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69a05db9-bdf5-4141-a267-932c862a4ca3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.678676 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m5fpx"] Dec 11 10:11:16 crc kubenswrapper[4746]: I1211 10:11:16.687649 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m5fpx"] Dec 11 10:11:17 crc kubenswrapper[4746]: I1211 10:11:17.648203 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69a05db9-bdf5-4141-a267-932c862a4ca3" path="/var/lib/kubelet/pods/69a05db9-bdf5-4141-a267-932c862a4ca3/volumes" Dec 11 10:11:17 crc kubenswrapper[4746]: I1211 10:11:17.710244 4746 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/ovn-controller-242vs" podUID="31760b52-7caf-49dd-bf1e-2d2f88b000a2" containerName="ovn-controller" probeResult="failure" output=< Dec 11 10:11:17 crc kubenswrapper[4746]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 11 10:11:17 crc kubenswrapper[4746]: > Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.043581 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.066097 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-w5jlj" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.285834 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-242vs-config-dlrgc"] Dec 11 10:11:18 crc kubenswrapper[4746]: E1211 10:11:18.286557 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a05db9-bdf5-4141-a267-932c862a4ca3" containerName="registry-server" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.286572 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="69a05db9-bdf5-4141-a267-932c862a4ca3" containerName="registry-server" Dec 11 10:11:18 crc kubenswrapper[4746]: E1211 10:11:18.286583 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a05db9-bdf5-4141-a267-932c862a4ca3" containerName="extract-content" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.286589 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="69a05db9-bdf5-4141-a267-932c862a4ca3" containerName="extract-content" Dec 11 10:11:18 crc kubenswrapper[4746]: E1211 10:11:18.286607 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69a05db9-bdf5-4141-a267-932c862a4ca3" containerName="extract-utilities" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.286615 4746 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="69a05db9-bdf5-4141-a267-932c862a4ca3" containerName="extract-utilities" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.286820 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="69a05db9-bdf5-4141-a267-932c862a4ca3" containerName="registry-server" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.287413 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.291142 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.302200 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-242vs-config-dlrgc"] Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.418137 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-run\") pod \"ovn-controller-242vs-config-dlrgc\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.418224 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bgvq\" (UniqueName: \"kubernetes.io/projected/e41a9dae-476f-4f96-93c2-b94ba44dcf21-kube-api-access-4bgvq\") pod \"ovn-controller-242vs-config-dlrgc\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.418261 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e41a9dae-476f-4f96-93c2-b94ba44dcf21-scripts\") pod \"ovn-controller-242vs-config-dlrgc\" (UID: 
\"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.418285 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-run-ovn\") pod \"ovn-controller-242vs-config-dlrgc\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.418318 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-log-ovn\") pod \"ovn-controller-242vs-config-dlrgc\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.418496 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e41a9dae-476f-4f96-93c2-b94ba44dcf21-additional-scripts\") pod \"ovn-controller-242vs-config-dlrgc\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.520766 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-run\") pod \"ovn-controller-242vs-config-dlrgc\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.520866 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bgvq\" (UniqueName: \"kubernetes.io/projected/e41a9dae-476f-4f96-93c2-b94ba44dcf21-kube-api-access-4bgvq\") 
pod \"ovn-controller-242vs-config-dlrgc\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.520897 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e41a9dae-476f-4f96-93c2-b94ba44dcf21-scripts\") pod \"ovn-controller-242vs-config-dlrgc\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.520921 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-run-ovn\") pod \"ovn-controller-242vs-config-dlrgc\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.520946 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-log-ovn\") pod \"ovn-controller-242vs-config-dlrgc\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.520982 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e41a9dae-476f-4f96-93c2-b94ba44dcf21-additional-scripts\") pod \"ovn-controller-242vs-config-dlrgc\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.521175 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-run\") pod \"ovn-controller-242vs-config-dlrgc\" 
(UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.521192 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-run-ovn\") pod \"ovn-controller-242vs-config-dlrgc\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.521176 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-log-ovn\") pod \"ovn-controller-242vs-config-dlrgc\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.521935 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e41a9dae-476f-4f96-93c2-b94ba44dcf21-additional-scripts\") pod \"ovn-controller-242vs-config-dlrgc\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.523394 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e41a9dae-476f-4f96-93c2-b94ba44dcf21-scripts\") pod \"ovn-controller-242vs-config-dlrgc\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.547843 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bgvq\" (UniqueName: \"kubernetes.io/projected/e41a9dae-476f-4f96-93c2-b94ba44dcf21-kube-api-access-4bgvq\") pod \"ovn-controller-242vs-config-dlrgc\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") 
" pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:18 crc kubenswrapper[4746]: I1211 10:11:18.615172 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:19 crc kubenswrapper[4746]: I1211 10:11:19.001249 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-242vs-config-dlrgc"] Dec 11 10:11:19 crc kubenswrapper[4746]: I1211 10:11:19.429507 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-242vs-config-dlrgc" event={"ID":"e41a9dae-476f-4f96-93c2-b94ba44dcf21","Type":"ContainerStarted","Data":"f9dda2df02fe8fbb3c8748564c18ac246d96c03c35812c00f43554c7afeb7dc1"} Dec 11 10:11:19 crc kubenswrapper[4746]: I1211 10:11:19.429840 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-242vs-config-dlrgc" event={"ID":"e41a9dae-476f-4f96-93c2-b94ba44dcf21","Type":"ContainerStarted","Data":"c5146d96a9a83c257461e4a5b96b50ccdfbec2719b31843781affe7ff8f2ec57"} Dec 11 10:11:19 crc kubenswrapper[4746]: I1211 10:11:19.468010 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-242vs-config-dlrgc" podStartSLOduration=1.467991355 podStartE2EDuration="1.467991355s" podCreationTimestamp="2025-12-11 10:11:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:11:19.459977641 +0000 UTC m=+1052.319840964" watchObservedRunningTime="2025-12-11 10:11:19.467991355 +0000 UTC m=+1052.327854658" Dec 11 10:11:20 crc kubenswrapper[4746]: I1211 10:11:20.443213 4746 generic.go:334] "Generic (PLEG): container finished" podID="e41a9dae-476f-4f96-93c2-b94ba44dcf21" containerID="f9dda2df02fe8fbb3c8748564c18ac246d96c03c35812c00f43554c7afeb7dc1" exitCode=0 Dec 11 10:11:20 crc kubenswrapper[4746]: I1211 10:11:20.443335 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-242vs-config-dlrgc" event={"ID":"e41a9dae-476f-4f96-93c2-b94ba44dcf21","Type":"ContainerDied","Data":"f9dda2df02fe8fbb3c8748564c18ac246d96c03c35812c00f43554c7afeb7dc1"} Dec 11 10:11:22 crc kubenswrapper[4746]: I1211 10:11:22.307893 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0" Dec 11 10:11:22 crc kubenswrapper[4746]: I1211 10:11:22.318338 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/3df27f8b-76bd-441d-9c3a-2b8bd1f250c7-etc-swift\") pod \"swift-storage-0\" (UID: \"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7\") " pod="openstack/swift-storage-0" Dec 11 10:11:22 crc kubenswrapper[4746]: I1211 10:11:22.590168 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 11 10:11:22 crc kubenswrapper[4746]: I1211 10:11:22.709655 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-242vs" Dec 11 10:11:23 crc kubenswrapper[4746]: I1211 10:11:23.751321 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.086988 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-crmlt"] Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.088572 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-crmlt" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.102010 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-crmlt"] Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.241938 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb5a612d-f335-4dfc-912b-30247387c806-operator-scripts\") pod \"barbican-db-create-crmlt\" (UID: \"fb5a612d-f335-4dfc-912b-30247387c806\") " pod="openstack/barbican-db-create-crmlt" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.242007 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dgjj\" (UniqueName: \"kubernetes.io/projected/fb5a612d-f335-4dfc-912b-30247387c806-kube-api-access-2dgjj\") pod \"barbican-db-create-crmlt\" (UID: \"fb5a612d-f335-4dfc-912b-30247387c806\") " pod="openstack/barbican-db-create-crmlt" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.257930 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-9a83-account-create-update-5s7gx"] Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.259537 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9a83-account-create-update-5s7gx" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.269264 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.280485 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-c6862"] Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.282123 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-c6862" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.288441 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9a83-account-create-update-5s7gx"] Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.300274 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-c6862"] Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.343960 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4f6x\" (UniqueName: \"kubernetes.io/projected/e61f0916-b247-44ba-bf5c-fcd4e00ecd88-kube-api-access-j4f6x\") pod \"cinder-db-create-c6862\" (UID: \"e61f0916-b247-44ba-bf5c-fcd4e00ecd88\") " pod="openstack/cinder-db-create-c6862" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.344063 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb5a612d-f335-4dfc-912b-30247387c806-operator-scripts\") pod \"barbican-db-create-crmlt\" (UID: \"fb5a612d-f335-4dfc-912b-30247387c806\") " pod="openstack/barbican-db-create-crmlt" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.344139 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dgjj\" (UniqueName: \"kubernetes.io/projected/fb5a612d-f335-4dfc-912b-30247387c806-kube-api-access-2dgjj\") pod \"barbican-db-create-crmlt\" (UID: \"fb5a612d-f335-4dfc-912b-30247387c806\") " pod="openstack/barbican-db-create-crmlt" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.344279 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rzrq\" (UniqueName: \"kubernetes.io/projected/ada9f489-ce1d-4251-a6e3-cfa7f322d9f0-kube-api-access-5rzrq\") pod \"barbican-9a83-account-create-update-5s7gx\" (UID: \"ada9f489-ce1d-4251-a6e3-cfa7f322d9f0\") " 
pod="openstack/barbican-9a83-account-create-update-5s7gx" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.344327 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ada9f489-ce1d-4251-a6e3-cfa7f322d9f0-operator-scripts\") pod \"barbican-9a83-account-create-update-5s7gx\" (UID: \"ada9f489-ce1d-4251-a6e3-cfa7f322d9f0\") " pod="openstack/barbican-9a83-account-create-update-5s7gx" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.344433 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e61f0916-b247-44ba-bf5c-fcd4e00ecd88-operator-scripts\") pod \"cinder-db-create-c6862\" (UID: \"e61f0916-b247-44ba-bf5c-fcd4e00ecd88\") " pod="openstack/cinder-db-create-c6862" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.344855 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb5a612d-f335-4dfc-912b-30247387c806-operator-scripts\") pod \"barbican-db-create-crmlt\" (UID: \"fb5a612d-f335-4dfc-912b-30247387c806\") " pod="openstack/barbican-db-create-crmlt" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.363361 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-3829-account-create-update-gkgnn"] Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.364457 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3829-account-create-update-gkgnn" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.369230 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.381763 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dgjj\" (UniqueName: \"kubernetes.io/projected/fb5a612d-f335-4dfc-912b-30247387c806-kube-api-access-2dgjj\") pod \"barbican-db-create-crmlt\" (UID: \"fb5a612d-f335-4dfc-912b-30247387c806\") " pod="openstack/barbican-db-create-crmlt" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.381998 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3829-account-create-update-gkgnn"] Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.423648 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-crmlt" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.446330 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4f6x\" (UniqueName: \"kubernetes.io/projected/e61f0916-b247-44ba-bf5c-fcd4e00ecd88-kube-api-access-j4f6x\") pod \"cinder-db-create-c6862\" (UID: \"e61f0916-b247-44ba-bf5c-fcd4e00ecd88\") " pod="openstack/cinder-db-create-c6862" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.446448 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s927m\" (UniqueName: \"kubernetes.io/projected/ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820-kube-api-access-s927m\") pod \"cinder-3829-account-create-update-gkgnn\" (UID: \"ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820\") " pod="openstack/cinder-3829-account-create-update-gkgnn" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.446544 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rzrq\" 
(UniqueName: \"kubernetes.io/projected/ada9f489-ce1d-4251-a6e3-cfa7f322d9f0-kube-api-access-5rzrq\") pod \"barbican-9a83-account-create-update-5s7gx\" (UID: \"ada9f489-ce1d-4251-a6e3-cfa7f322d9f0\") " pod="openstack/barbican-9a83-account-create-update-5s7gx" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.446700 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ada9f489-ce1d-4251-a6e3-cfa7f322d9f0-operator-scripts\") pod \"barbican-9a83-account-create-update-5s7gx\" (UID: \"ada9f489-ce1d-4251-a6e3-cfa7f322d9f0\") " pod="openstack/barbican-9a83-account-create-update-5s7gx" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.446875 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e61f0916-b247-44ba-bf5c-fcd4e00ecd88-operator-scripts\") pod \"cinder-db-create-c6862\" (UID: \"e61f0916-b247-44ba-bf5c-fcd4e00ecd88\") " pod="openstack/cinder-db-create-c6862" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.446951 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820-operator-scripts\") pod \"cinder-3829-account-create-update-gkgnn\" (UID: \"ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820\") " pod="openstack/cinder-3829-account-create-update-gkgnn" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.447771 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ada9f489-ce1d-4251-a6e3-cfa7f322d9f0-operator-scripts\") pod \"barbican-9a83-account-create-update-5s7gx\" (UID: \"ada9f489-ce1d-4251-a6e3-cfa7f322d9f0\") " pod="openstack/barbican-9a83-account-create-update-5s7gx" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.447917 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e61f0916-b247-44ba-bf5c-fcd4e00ecd88-operator-scripts\") pod \"cinder-db-create-c6862\" (UID: \"e61f0916-b247-44ba-bf5c-fcd4e00ecd88\") " pod="openstack/cinder-db-create-c6862" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.459739 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-2b5xl"] Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.465417 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2b5xl" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.475165 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4f6x\" (UniqueName: \"kubernetes.io/projected/e61f0916-b247-44ba-bf5c-fcd4e00ecd88-kube-api-access-j4f6x\") pod \"cinder-db-create-c6862\" (UID: \"e61f0916-b247-44ba-bf5c-fcd4e00ecd88\") " pod="openstack/cinder-db-create-c6862" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.476702 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2b5xl"] Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.486795 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rzrq\" (UniqueName: \"kubernetes.io/projected/ada9f489-ce1d-4251-a6e3-cfa7f322d9f0-kube-api-access-5rzrq\") pod \"barbican-9a83-account-create-update-5s7gx\" (UID: \"ada9f489-ce1d-4251-a6e3-cfa7f322d9f0\") " pod="openstack/barbican-9a83-account-create-update-5s7gx" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.539376 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.548201 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-2bptw"] Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.548759 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820-operator-scripts\") pod \"cinder-3829-account-create-update-gkgnn\" (UID: \"ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820\") " pod="openstack/cinder-3829-account-create-update-gkgnn" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.548820 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqqwh\" (UniqueName: \"kubernetes.io/projected/17bccce7-01c4-4456-a26b-c01374a263b5-kube-api-access-vqqwh\") pod \"neutron-db-create-2b5xl\" (UID: \"17bccce7-01c4-4456-a26b-c01374a263b5\") " pod="openstack/neutron-db-create-2b5xl" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.548873 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17bccce7-01c4-4456-a26b-c01374a263b5-operator-scripts\") pod \"neutron-db-create-2b5xl\" (UID: \"17bccce7-01c4-4456-a26b-c01374a263b5\") " pod="openstack/neutron-db-create-2b5xl" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.548959 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s927m\" (UniqueName: \"kubernetes.io/projected/ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820-kube-api-access-s927m\") pod \"cinder-3829-account-create-update-gkgnn\" (UID: \"ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820\") " pod="openstack/cinder-3829-account-create-update-gkgnn" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.549543 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820-operator-scripts\") pod \"cinder-3829-account-create-update-gkgnn\" (UID: \"ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820\") " pod="openstack/cinder-3829-account-create-update-gkgnn" Dec 11 
10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.550300 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2bptw" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.553122 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.553581 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.553754 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.569770 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-slbzm" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.577014 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s927m\" (UniqueName: \"kubernetes.io/projected/ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820-kube-api-access-s927m\") pod \"cinder-3829-account-create-update-gkgnn\" (UID: \"ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820\") " pod="openstack/cinder-3829-account-create-update-gkgnn" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.582463 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9a83-account-create-update-5s7gx" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.592677 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2bptw"] Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.608112 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-c6862" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.622453 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4451-account-create-update-tld6b"] Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.623661 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4451-account-create-update-tld6b" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.639193 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.655973 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqqwh\" (UniqueName: \"kubernetes.io/projected/17bccce7-01c4-4456-a26b-c01374a263b5-kube-api-access-vqqwh\") pod \"neutron-db-create-2b5xl\" (UID: \"17bccce7-01c4-4456-a26b-c01374a263b5\") " pod="openstack/neutron-db-create-2b5xl" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.658612 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17bccce7-01c4-4456-a26b-c01374a263b5-operator-scripts\") pod \"neutron-db-create-2b5xl\" (UID: \"17bccce7-01c4-4456-a26b-c01374a263b5\") " pod="openstack/neutron-db-create-2b5xl" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.659063 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480c95bc-8a38-4304-af6c-3118a7571459-combined-ca-bundle\") pod \"keystone-db-sync-2bptw\" (UID: \"480c95bc-8a38-4304-af6c-3118a7571459\") " pod="openstack/keystone-db-sync-2bptw" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.659284 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22rnh\" (UniqueName: 
\"kubernetes.io/projected/480c95bc-8a38-4304-af6c-3118a7571459-kube-api-access-22rnh\") pod \"keystone-db-sync-2bptw\" (UID: \"480c95bc-8a38-4304-af6c-3118a7571459\") " pod="openstack/keystone-db-sync-2bptw" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.659533 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480c95bc-8a38-4304-af6c-3118a7571459-config-data\") pod \"keystone-db-sync-2bptw\" (UID: \"480c95bc-8a38-4304-af6c-3118a7571459\") " pod="openstack/keystone-db-sync-2bptw" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.662893 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17bccce7-01c4-4456-a26b-c01374a263b5-operator-scripts\") pod \"neutron-db-create-2b5xl\" (UID: \"17bccce7-01c4-4456-a26b-c01374a263b5\") " pod="openstack/neutron-db-create-2b5xl" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.700877 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqqwh\" (UniqueName: \"kubernetes.io/projected/17bccce7-01c4-4456-a26b-c01374a263b5-kube-api-access-vqqwh\") pod \"neutron-db-create-2b5xl\" (UID: \"17bccce7-01c4-4456-a26b-c01374a263b5\") " pod="openstack/neutron-db-create-2b5xl" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.707954 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4451-account-create-update-tld6b"] Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.728846 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-3829-account-create-update-gkgnn" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.761123 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480c95bc-8a38-4304-af6c-3118a7571459-combined-ca-bundle\") pod \"keystone-db-sync-2bptw\" (UID: \"480c95bc-8a38-4304-af6c-3118a7571459\") " pod="openstack/keystone-db-sync-2bptw" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.761217 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de549276-34a9-48bd-8635-a46910019250-operator-scripts\") pod \"neutron-4451-account-create-update-tld6b\" (UID: \"de549276-34a9-48bd-8635-a46910019250\") " pod="openstack/neutron-4451-account-create-update-tld6b" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.761267 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22rnh\" (UniqueName: \"kubernetes.io/projected/480c95bc-8a38-4304-af6c-3118a7571459-kube-api-access-22rnh\") pod \"keystone-db-sync-2bptw\" (UID: \"480c95bc-8a38-4304-af6c-3118a7571459\") " pod="openstack/keystone-db-sync-2bptw" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.761305 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480c95bc-8a38-4304-af6c-3118a7571459-config-data\") pod \"keystone-db-sync-2bptw\" (UID: \"480c95bc-8a38-4304-af6c-3118a7571459\") " pod="openstack/keystone-db-sync-2bptw" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.761369 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrjrx\" (UniqueName: \"kubernetes.io/projected/de549276-34a9-48bd-8635-a46910019250-kube-api-access-xrjrx\") pod \"neutron-4451-account-create-update-tld6b\" 
(UID: \"de549276-34a9-48bd-8635-a46910019250\") " pod="openstack/neutron-4451-account-create-update-tld6b" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.766665 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480c95bc-8a38-4304-af6c-3118a7571459-combined-ca-bundle\") pod \"keystone-db-sync-2bptw\" (UID: \"480c95bc-8a38-4304-af6c-3118a7571459\") " pod="openstack/keystone-db-sync-2bptw" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.767229 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480c95bc-8a38-4304-af6c-3118a7571459-config-data\") pod \"keystone-db-sync-2bptw\" (UID: \"480c95bc-8a38-4304-af6c-3118a7571459\") " pod="openstack/keystone-db-sync-2bptw" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.781073 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22rnh\" (UniqueName: \"kubernetes.io/projected/480c95bc-8a38-4304-af6c-3118a7571459-kube-api-access-22rnh\") pod \"keystone-db-sync-2bptw\" (UID: \"480c95bc-8a38-4304-af6c-3118a7571459\") " pod="openstack/keystone-db-sync-2bptw" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.863262 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrjrx\" (UniqueName: \"kubernetes.io/projected/de549276-34a9-48bd-8635-a46910019250-kube-api-access-xrjrx\") pod \"neutron-4451-account-create-update-tld6b\" (UID: \"de549276-34a9-48bd-8635-a46910019250\") " pod="openstack/neutron-4451-account-create-update-tld6b" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.863359 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de549276-34a9-48bd-8635-a46910019250-operator-scripts\") pod \"neutron-4451-account-create-update-tld6b\" (UID: 
\"de549276-34a9-48bd-8635-a46910019250\") " pod="openstack/neutron-4451-account-create-update-tld6b" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.864373 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de549276-34a9-48bd-8635-a46910019250-operator-scripts\") pod \"neutron-4451-account-create-update-tld6b\" (UID: \"de549276-34a9-48bd-8635-a46910019250\") " pod="openstack/neutron-4451-account-create-update-tld6b" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.874305 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2b5xl" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.886734 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrjrx\" (UniqueName: \"kubernetes.io/projected/de549276-34a9-48bd-8635-a46910019250-kube-api-access-xrjrx\") pod \"neutron-4451-account-create-update-tld6b\" (UID: \"de549276-34a9-48bd-8635-a46910019250\") " pod="openstack/neutron-4451-account-create-update-tld6b" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.930600 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-2bptw" Dec 11 10:11:24 crc kubenswrapper[4746]: I1211 10:11:24.971582 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4451-account-create-update-tld6b" Dec 11 10:11:29 crc kubenswrapper[4746]: E1211 10:11:29.060395 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api:current-podified" Dec 11 10:11:29 crc kubenswrapper[4746]: E1211 10:11:29.061329 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nf2st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],
},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-fh8cg_openstack(bce3e7bb-063f-4e71-b1d7-e3a14e1a9983): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:11:29 crc kubenswrapper[4746]: E1211 10:11:29.062513 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-fh8cg" podUID="bce3e7bb-063f-4e71-b1d7-e3a14e1a9983" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.290057 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.429400 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e41a9dae-476f-4f96-93c2-b94ba44dcf21-scripts\") pod \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.430291 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-run-ovn\") pod \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.430329 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-log-ovn\") pod \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.430375 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "e41a9dae-476f-4f96-93c2-b94ba44dcf21" (UID: "e41a9dae-476f-4f96-93c2-b94ba44dcf21"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.430428 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-run\") pod \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.430456 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bgvq\" (UniqueName: \"kubernetes.io/projected/e41a9dae-476f-4f96-93c2-b94ba44dcf21-kube-api-access-4bgvq\") pod \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.430514 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e41a9dae-476f-4f96-93c2-b94ba44dcf21-additional-scripts\") pod \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\" (UID: \"e41a9dae-476f-4f96-93c2-b94ba44dcf21\") " Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.430496 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-run" (OuterVolumeSpecName: "var-run") pod "e41a9dae-476f-4f96-93c2-b94ba44dcf21" (UID: "e41a9dae-476f-4f96-93c2-b94ba44dcf21"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.430896 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41a9dae-476f-4f96-93c2-b94ba44dcf21-scripts" (OuterVolumeSpecName: "scripts") pod "e41a9dae-476f-4f96-93c2-b94ba44dcf21" (UID: "e41a9dae-476f-4f96-93c2-b94ba44dcf21"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.431195 4746 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.431210 4746 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-run\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.431384 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "e41a9dae-476f-4f96-93c2-b94ba44dcf21" (UID: "e41a9dae-476f-4f96-93c2-b94ba44dcf21"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.434239 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e41a9dae-476f-4f96-93c2-b94ba44dcf21-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "e41a9dae-476f-4f96-93c2-b94ba44dcf21" (UID: "e41a9dae-476f-4f96-93c2-b94ba44dcf21"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.437169 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e41a9dae-476f-4f96-93c2-b94ba44dcf21-kube-api-access-4bgvq" (OuterVolumeSpecName: "kube-api-access-4bgvq") pod "e41a9dae-476f-4f96-93c2-b94ba44dcf21" (UID: "e41a9dae-476f-4f96-93c2-b94ba44dcf21"). InnerVolumeSpecName "kube-api-access-4bgvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.533766 4746 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e41a9dae-476f-4f96-93c2-b94ba44dcf21-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.533804 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bgvq\" (UniqueName: \"kubernetes.io/projected/e41a9dae-476f-4f96-93c2-b94ba44dcf21-kube-api-access-4bgvq\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.533828 4746 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/e41a9dae-476f-4f96-93c2-b94ba44dcf21-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.533840 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e41a9dae-476f-4f96-93c2-b94ba44dcf21-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.577362 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-242vs-config-dlrgc" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.578196 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-242vs-config-dlrgc" event={"ID":"e41a9dae-476f-4f96-93c2-b94ba44dcf21","Type":"ContainerDied","Data":"c5146d96a9a83c257461e4a5b96b50ccdfbec2719b31843781affe7ff8f2ec57"} Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.578255 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5146d96a9a83c257461e4a5b96b50ccdfbec2719b31843781affe7ff8f2ec57" Dec 11 10:11:29 crc kubenswrapper[4746]: E1211 10:11:29.584322 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api:current-podified\\\"\"" pod="openstack/glance-db-sync-fh8cg" podUID="bce3e7bb-063f-4e71-b1d7-e3a14e1a9983" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.594754 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-c6862"] Dec 11 10:11:29 crc kubenswrapper[4746]: W1211 10:11:29.600907 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode61f0916_b247_44ba_bf5c_fcd4e00ecd88.slice/crio-3514218f926b1abd2c6b330bc6b6b4e7483652cf55a914ab92d7f56c1800c1c9 WatchSource:0}: Error finding container 3514218f926b1abd2c6b330bc6b6b4e7483652cf55a914ab92d7f56c1800c1c9: Status 404 returned error can't find the container with id 3514218f926b1abd2c6b330bc6b6b4e7483652cf55a914ab92d7f56c1800c1c9 Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.880178 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.880679 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.880729 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.881453 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3599b9865470e5f66c552862b8f5ba28a4b29a63faedd683cf231e8c14b3f2f"} pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.881504 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" containerID="cri-o://f3599b9865470e5f66c552862b8f5ba28a4b29a63faedd683cf231e8c14b3f2f" gracePeriod=600 Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.918534 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-crmlt"] Dec 11 10:11:29 crc kubenswrapper[4746]: W1211 10:11:29.946174 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17bccce7_01c4_4456_a26b_c01374a263b5.slice/crio-4a1af834614e402b1a00acc2af1a28e3cba0dc6da19c229e84382644d78f8876 WatchSource:0}: Error finding container 
4a1af834614e402b1a00acc2af1a28e3cba0dc6da19c229e84382644d78f8876: Status 404 returned error can't find the container with id 4a1af834614e402b1a00acc2af1a28e3cba0dc6da19c229e84382644d78f8876 Dec 11 10:11:29 crc kubenswrapper[4746]: W1211 10:11:29.947477 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podceb6d8cf_2891_4f5e_85e1_af1cc4dd2820.slice/crio-db8d9dc8e2b30473c0d76de98d8ebb35bccaa0f5ee00cc54af59c6ba12faee2b WatchSource:0}: Error finding container db8d9dc8e2b30473c0d76de98d8ebb35bccaa0f5ee00cc54af59c6ba12faee2b: Status 404 returned error can't find the container with id db8d9dc8e2b30473c0d76de98d8ebb35bccaa0f5ee00cc54af59c6ba12faee2b Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.947532 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-3829-account-create-update-gkgnn"] Dec 11 10:11:29 crc kubenswrapper[4746]: W1211 10:11:29.949417 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb5a612d_f335_4dfc_912b_30247387c806.slice/crio-c17d8062242545898d82fb9c4035b070f89f1b9cac532e65c34523bcabc4005a WatchSource:0}: Error finding container c17d8062242545898d82fb9c4035b070f89f1b9cac532e65c34523bcabc4005a: Status 404 returned error can't find the container with id c17d8062242545898d82fb9c4035b070f89f1b9cac532e65c34523bcabc4005a Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.968415 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2b5xl"] Dec 11 10:11:29 crc kubenswrapper[4746]: I1211 10:11:29.975296 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-2bptw"] Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.105195 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 11 10:11:30 crc kubenswrapper[4746]: W1211 10:11:30.112371 4746 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3df27f8b_76bd_441d_9c3a_2b8bd1f250c7.slice/crio-a89d8d3d76399c0c76ee29bdf7e2232f6b6c085ef3540169d9d6b4e5d4c6f9a4 WatchSource:0}: Error finding container a89d8d3d76399c0c76ee29bdf7e2232f6b6c085ef3540169d9d6b4e5d4c6f9a4: Status 404 returned error can't find the container with id a89d8d3d76399c0c76ee29bdf7e2232f6b6c085ef3540169d9d6b4e5d4c6f9a4 Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.114931 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-9a83-account-create-update-5s7gx"] Dec 11 10:11:30 crc kubenswrapper[4746]: W1211 10:11:30.115720 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podada9f489_ce1d_4251_a6e3_cfa7f322d9f0.slice/crio-17a07f7ac08b6088a4c042e06d5ec59ddf1fa4312571ceb3ff9360a9c5d49f51 WatchSource:0}: Error finding container 17a07f7ac08b6088a4c042e06d5ec59ddf1fa4312571ceb3ff9360a9c5d49f51: Status 404 returned error can't find the container with id 17a07f7ac08b6088a4c042e06d5ec59ddf1fa4312571ceb3ff9360a9c5d49f51 Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.160889 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4451-account-create-update-tld6b"] Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.420871 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-242vs-config-dlrgc"] Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.427626 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-242vs-config-dlrgc"] Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.583080 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-242vs-config-dwbxl"] Dec 11 10:11:30 crc kubenswrapper[4746]: E1211 10:11:30.583807 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e41a9dae-476f-4f96-93c2-b94ba44dcf21" containerName="ovn-config" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.583834 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e41a9dae-476f-4f96-93c2-b94ba44dcf21" containerName="ovn-config" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.584099 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e41a9dae-476f-4f96-93c2-b94ba44dcf21" containerName="ovn-config" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.589338 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.595845 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.597829 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-242vs-config-dwbxl"] Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.601477 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7","Type":"ContainerStarted","Data":"a89d8d3d76399c0c76ee29bdf7e2232f6b6c085ef3540169d9d6b4e5d4c6f9a4"} Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.617346 4746 generic.go:334] "Generic (PLEG): container finished" podID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerID="f3599b9865470e5f66c552862b8f5ba28a4b29a63faedd683cf231e8c14b3f2f" exitCode=0 Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.617429 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerDied","Data":"f3599b9865470e5f66c552862b8f5ba28a4b29a63faedd683cf231e8c14b3f2f"} Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.617474 4746 scope.go:117] "RemoveContainer" 
containerID="6d9fcc5a422995e470abf6d012848f503e289189e1935c89236af0c3efd0b192" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.620424 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2bptw" event={"ID":"480c95bc-8a38-4304-af6c-3118a7571459","Type":"ContainerStarted","Data":"634b033db7f2c86be414fc1aba1723d708b3b7e2548b8b08ceea90f33f474e4c"} Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.621825 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3829-account-create-update-gkgnn" event={"ID":"ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820","Type":"ContainerStarted","Data":"db8d9dc8e2b30473c0d76de98d8ebb35bccaa0f5ee00cc54af59c6ba12faee2b"} Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.627028 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-crmlt" event={"ID":"fb5a612d-f335-4dfc-912b-30247387c806","Type":"ContainerStarted","Data":"c17d8062242545898d82fb9c4035b070f89f1b9cac532e65c34523bcabc4005a"} Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.627962 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2b5xl" event={"ID":"17bccce7-01c4-4456-a26b-c01374a263b5","Type":"ContainerStarted","Data":"4a1af834614e402b1a00acc2af1a28e3cba0dc6da19c229e84382644d78f8876"} Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.635271 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4451-account-create-update-tld6b" event={"ID":"de549276-34a9-48bd-8635-a46910019250","Type":"ContainerStarted","Data":"0d60fb364ec41a8e7285f03dacbdf41b2ba82b95fd3a773ad64b169f2d8578fd"} Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.636857 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9a83-account-create-update-5s7gx" event={"ID":"ada9f489-ce1d-4251-a6e3-cfa7f322d9f0","Type":"ContainerStarted","Data":"17a07f7ac08b6088a4c042e06d5ec59ddf1fa4312571ceb3ff9360a9c5d49f51"} Dec 11 
10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.639795 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-c6862" event={"ID":"e61f0916-b247-44ba-bf5c-fcd4e00ecd88","Type":"ContainerStarted","Data":"25d3f9973d044327d2f3d3592206e84765edc38b80c461214b1d6b49256d8209"} Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.640074 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-c6862" event={"ID":"e61f0916-b247-44ba-bf5c-fcd4e00ecd88","Type":"ContainerStarted","Data":"3514218f926b1abd2c6b330bc6b6b4e7483652cf55a914ab92d7f56c1800c1c9"} Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.658573 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-c6862" podStartSLOduration=6.65855544 podStartE2EDuration="6.65855544s" podCreationTimestamp="2025-12-11 10:11:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:11:30.656456683 +0000 UTC m=+1063.516319996" watchObservedRunningTime="2025-12-11 10:11:30.65855544 +0000 UTC m=+1063.518418753" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.765730 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-log-ovn\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.765829 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7c8a1717-6a29-4da9-adc7-66191893de25-additional-scripts\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " 
pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.765861 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-run\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.765910 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqff6\" (UniqueName: \"kubernetes.io/projected/7c8a1717-6a29-4da9-adc7-66191893de25-kube-api-access-dqff6\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.766071 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c8a1717-6a29-4da9-adc7-66191893de25-scripts\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.766160 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-run-ovn\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.867495 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-log-ovn\") pod \"ovn-controller-242vs-config-dwbxl\" 
(UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.867571 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7c8a1717-6a29-4da9-adc7-66191893de25-additional-scripts\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.867596 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-run\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.867849 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqff6\" (UniqueName: \"kubernetes.io/projected/7c8a1717-6a29-4da9-adc7-66191893de25-kube-api-access-dqff6\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.867928 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c8a1717-6a29-4da9-adc7-66191893de25-scripts\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.867994 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-run-ovn\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: 
\"7c8a1717-6a29-4da9-adc7-66191893de25\") " pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.868512 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-run\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.868749 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-log-ovn\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.868923 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-run-ovn\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.869538 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7c8a1717-6a29-4da9-adc7-66191893de25-additional-scripts\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.871616 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c8a1717-6a29-4da9-adc7-66191893de25-scripts\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " 
pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.902267 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqff6\" (UniqueName: \"kubernetes.io/projected/7c8a1717-6a29-4da9-adc7-66191893de25-kube-api-access-dqff6\") pod \"ovn-controller-242vs-config-dwbxl\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:30 crc kubenswrapper[4746]: I1211 10:11:30.913370 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:31 crc kubenswrapper[4746]: I1211 10:11:31.614622 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-242vs-config-dwbxl"] Dec 11 10:11:31 crc kubenswrapper[4746]: I1211 10:11:31.640010 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e41a9dae-476f-4f96-93c2-b94ba44dcf21" path="/var/lib/kubelet/pods/e41a9dae-476f-4f96-93c2-b94ba44dcf21/volumes" Dec 11 10:11:31 crc kubenswrapper[4746]: I1211 10:11:31.649898 4746 generic.go:334] "Generic (PLEG): container finished" podID="ada9f489-ce1d-4251-a6e3-cfa7f322d9f0" containerID="0b05f61810e28bb282fbacb0d00551ff3b6bb226eaa6d00f6a20748aaf1d382b" exitCode=0 Dec 11 10:11:31 crc kubenswrapper[4746]: I1211 10:11:31.649974 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9a83-account-create-update-5s7gx" event={"ID":"ada9f489-ce1d-4251-a6e3-cfa7f322d9f0","Type":"ContainerDied","Data":"0b05f61810e28bb282fbacb0d00551ff3b6bb226eaa6d00f6a20748aaf1d382b"} Dec 11 10:11:31 crc kubenswrapper[4746]: I1211 10:11:31.652938 4746 generic.go:334] "Generic (PLEG): container finished" podID="ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820" containerID="6d9bd831cfb0a1e3c1dd8758c55fdbd478b7b9562dafa9aafbb589a28f48f569" exitCode=0 Dec 11 10:11:31 crc kubenswrapper[4746]: I1211 10:11:31.653025 4746 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-3829-account-create-update-gkgnn" event={"ID":"ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820","Type":"ContainerDied","Data":"6d9bd831cfb0a1e3c1dd8758c55fdbd478b7b9562dafa9aafbb589a28f48f569"} Dec 11 10:11:31 crc kubenswrapper[4746]: I1211 10:11:31.656096 4746 generic.go:334] "Generic (PLEG): container finished" podID="e61f0916-b247-44ba-bf5c-fcd4e00ecd88" containerID="25d3f9973d044327d2f3d3592206e84765edc38b80c461214b1d6b49256d8209" exitCode=0 Dec 11 10:11:31 crc kubenswrapper[4746]: I1211 10:11:31.656153 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-c6862" event={"ID":"e61f0916-b247-44ba-bf5c-fcd4e00ecd88","Type":"ContainerDied","Data":"25d3f9973d044327d2f3d3592206e84765edc38b80c461214b1d6b49256d8209"} Dec 11 10:11:31 crc kubenswrapper[4746]: I1211 10:11:31.658118 4746 generic.go:334] "Generic (PLEG): container finished" podID="fb5a612d-f335-4dfc-912b-30247387c806" containerID="10c4b969993eb929646967e59f54a631e907bdb73d553f64713714bff1a2507a" exitCode=0 Dec 11 10:11:31 crc kubenswrapper[4746]: I1211 10:11:31.658170 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-crmlt" event={"ID":"fb5a612d-f335-4dfc-912b-30247387c806","Type":"ContainerDied","Data":"10c4b969993eb929646967e59f54a631e907bdb73d553f64713714bff1a2507a"} Dec 11 10:11:31 crc kubenswrapper[4746]: I1211 10:11:31.661552 4746 generic.go:334] "Generic (PLEG): container finished" podID="17bccce7-01c4-4456-a26b-c01374a263b5" containerID="a75e2635229c480fd856683d0b9651efb117cc90f7da069c267062aec45034e4" exitCode=0 Dec 11 10:11:31 crc kubenswrapper[4746]: I1211 10:11:31.661715 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2b5xl" event={"ID":"17bccce7-01c4-4456-a26b-c01374a263b5","Type":"ContainerDied","Data":"a75e2635229c480fd856683d0b9651efb117cc90f7da069c267062aec45034e4"} Dec 11 10:11:31 crc kubenswrapper[4746]: I1211 10:11:31.668897 4746 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"c83e849437aeb890459f914e7f689680afdaaf0b057ea749f6e91f887067183f"} Dec 11 10:11:31 crc kubenswrapper[4746]: I1211 10:11:31.673422 4746 generic.go:334] "Generic (PLEG): container finished" podID="de549276-34a9-48bd-8635-a46910019250" containerID="d4c09a6f8c51c3c63d8aa4c59c4bf7bf5a527c45f91d71af9babaf64fb738c80" exitCode=0 Dec 11 10:11:31 crc kubenswrapper[4746]: I1211 10:11:31.674405 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4451-account-create-update-tld6b" event={"ID":"de549276-34a9-48bd-8635-a46910019250","Type":"ContainerDied","Data":"d4c09a6f8c51c3c63d8aa4c59c4bf7bf5a527c45f91d71af9babaf64fb738c80"} Dec 11 10:11:31 crc kubenswrapper[4746]: W1211 10:11:31.913462 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c8a1717_6a29_4da9_adc7_66191893de25.slice/crio-a7ac6ef6d5eb1af2482e1e224575a6e4996747e331d6382d207d7e09bffa81c6 WatchSource:0}: Error finding container a7ac6ef6d5eb1af2482e1e224575a6e4996747e331d6382d207d7e09bffa81c6: Status 404 returned error can't find the container with id a7ac6ef6d5eb1af2482e1e224575a6e4996747e331d6382d207d7e09bffa81c6 Dec 11 10:11:32 crc kubenswrapper[4746]: I1211 10:11:32.688626 4746 generic.go:334] "Generic (PLEG): container finished" podID="7c8a1717-6a29-4da9-adc7-66191893de25" containerID="5f84a67f069fac5d08a89946a38929343f19acb2aae580250e324e5a35f213b9" exitCode=0 Dec 11 10:11:32 crc kubenswrapper[4746]: I1211 10:11:32.689289 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-242vs-config-dwbxl" event={"ID":"7c8a1717-6a29-4da9-adc7-66191893de25","Type":"ContainerDied","Data":"5f84a67f069fac5d08a89946a38929343f19acb2aae580250e324e5a35f213b9"} Dec 11 10:11:32 crc 
kubenswrapper[4746]: I1211 10:11:32.689364 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-242vs-config-dwbxl" event={"ID":"7c8a1717-6a29-4da9-adc7-66191893de25","Type":"ContainerStarted","Data":"a7ac6ef6d5eb1af2482e1e224575a6e4996747e331d6382d207d7e09bffa81c6"} Dec 11 10:11:32 crc kubenswrapper[4746]: I1211 10:11:32.693283 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7","Type":"ContainerStarted","Data":"4324252f9665d8a3835d47e109be3ef76efb110f83c9fe37d6c1e50af2b3d3a4"} Dec 11 10:11:32 crc kubenswrapper[4746]: I1211 10:11:32.693364 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7","Type":"ContainerStarted","Data":"ecd30a218131435ccb19b43f2e31cba7838205e67064f7778de1d2764b7351f3"} Dec 11 10:11:33 crc kubenswrapper[4746]: I1211 10:11:33.710484 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7","Type":"ContainerStarted","Data":"e3ccf205c9adc1de83c9e22e592b3003bccb3158c9098d27edb8ac729c37396d"} Dec 11 10:11:33 crc kubenswrapper[4746]: I1211 10:11:33.710845 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7","Type":"ContainerStarted","Data":"784fc294dca2d35b080de5eed81bb63587778c201927636f573b43af33d20110"} Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.085029 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.097126 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-c6862" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.116267 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9a83-account-create-update-5s7gx" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.158075 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-crmlt" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.161273 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2b5xl" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.185884 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4451-account-create-update-tld6b" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.187146 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-run-ovn\") pod \"7c8a1717-6a29-4da9-adc7-66191893de25\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.187227 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-log-ovn\") pod \"7c8a1717-6a29-4da9-adc7-66191893de25\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.187274 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb5a612d-f335-4dfc-912b-30247387c806-operator-scripts\") pod \"fb5a612d-f335-4dfc-912b-30247387c806\" (UID: \"fb5a612d-f335-4dfc-912b-30247387c806\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.187323 4746 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-j4f6x\" (UniqueName: \"kubernetes.io/projected/e61f0916-b247-44ba-bf5c-fcd4e00ecd88-kube-api-access-j4f6x\") pod \"e61f0916-b247-44ba-bf5c-fcd4e00ecd88\" (UID: \"e61f0916-b247-44ba-bf5c-fcd4e00ecd88\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.187379 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqqwh\" (UniqueName: \"kubernetes.io/projected/17bccce7-01c4-4456-a26b-c01374a263b5-kube-api-access-vqqwh\") pod \"17bccce7-01c4-4456-a26b-c01374a263b5\" (UID: \"17bccce7-01c4-4456-a26b-c01374a263b5\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.187424 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c8a1717-6a29-4da9-adc7-66191893de25-scripts\") pod \"7c8a1717-6a29-4da9-adc7-66191893de25\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.187475 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "7c8a1717-6a29-4da9-adc7-66191893de25" (UID: "7c8a1717-6a29-4da9-adc7-66191893de25"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.187480 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ada9f489-ce1d-4251-a6e3-cfa7f322d9f0-operator-scripts\") pod \"ada9f489-ce1d-4251-a6e3-cfa7f322d9f0\" (UID: \"ada9f489-ce1d-4251-a6e3-cfa7f322d9f0\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.187601 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7c8a1717-6a29-4da9-adc7-66191893de25-additional-scripts\") pod \"7c8a1717-6a29-4da9-adc7-66191893de25\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.187660 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rzrq\" (UniqueName: \"kubernetes.io/projected/ada9f489-ce1d-4251-a6e3-cfa7f322d9f0-kube-api-access-5rzrq\") pod \"ada9f489-ce1d-4251-a6e3-cfa7f322d9f0\" (UID: \"ada9f489-ce1d-4251-a6e3-cfa7f322d9f0\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.187699 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de549276-34a9-48bd-8635-a46910019250-operator-scripts\") pod \"de549276-34a9-48bd-8635-a46910019250\" (UID: \"de549276-34a9-48bd-8635-a46910019250\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.188371 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ada9f489-ce1d-4251-a6e3-cfa7f322d9f0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ada9f489-ce1d-4251-a6e3-cfa7f322d9f0" (UID: "ada9f489-ce1d-4251-a6e3-cfa7f322d9f0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.188371 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb5a612d-f335-4dfc-912b-30247387c806-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb5a612d-f335-4dfc-912b-30247387c806" (UID: "fb5a612d-f335-4dfc-912b-30247387c806"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.189255 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de549276-34a9-48bd-8635-a46910019250-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "de549276-34a9-48bd-8635-a46910019250" (UID: "de549276-34a9-48bd-8635-a46910019250"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.189301 4746 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.189336 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "7c8a1717-6a29-4da9-adc7-66191893de25" (UID: "7c8a1717-6a29-4da9-adc7-66191893de25"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.189413 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c8a1717-6a29-4da9-adc7-66191893de25-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "7c8a1717-6a29-4da9-adc7-66191893de25" (UID: "7c8a1717-6a29-4da9-adc7-66191893de25"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.190459 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c8a1717-6a29-4da9-adc7-66191893de25-scripts" (OuterVolumeSpecName: "scripts") pod "7c8a1717-6a29-4da9-adc7-66191893de25" (UID: "7c8a1717-6a29-4da9-adc7-66191893de25"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.201949 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e61f0916-b247-44ba-bf5c-fcd4e00ecd88-kube-api-access-j4f6x" (OuterVolumeSpecName: "kube-api-access-j4f6x") pod "e61f0916-b247-44ba-bf5c-fcd4e00ecd88" (UID: "e61f0916-b247-44ba-bf5c-fcd4e00ecd88"). InnerVolumeSpecName "kube-api-access-j4f6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.217761 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17bccce7-01c4-4456-a26b-c01374a263b5-kube-api-access-vqqwh" (OuterVolumeSpecName: "kube-api-access-vqqwh") pod "17bccce7-01c4-4456-a26b-c01374a263b5" (UID: "17bccce7-01c4-4456-a26b-c01374a263b5"). InnerVolumeSpecName "kube-api-access-vqqwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.229012 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ada9f489-ce1d-4251-a6e3-cfa7f322d9f0-kube-api-access-5rzrq" (OuterVolumeSpecName: "kube-api-access-5rzrq") pod "ada9f489-ce1d-4251-a6e3-cfa7f322d9f0" (UID: "ada9f489-ce1d-4251-a6e3-cfa7f322d9f0"). InnerVolumeSpecName "kube-api-access-5rzrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.250225 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3829-account-create-update-gkgnn" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.291373 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-run\") pod \"7c8a1717-6a29-4da9-adc7-66191893de25\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.291519 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqff6\" (UniqueName: \"kubernetes.io/projected/7c8a1717-6a29-4da9-adc7-66191893de25-kube-api-access-dqff6\") pod \"7c8a1717-6a29-4da9-adc7-66191893de25\" (UID: \"7c8a1717-6a29-4da9-adc7-66191893de25\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.291946 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s927m\" (UniqueName: \"kubernetes.io/projected/ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820-kube-api-access-s927m\") pod \"ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820\" (UID: \"ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.292009 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrjrx\" (UniqueName: 
\"kubernetes.io/projected/de549276-34a9-48bd-8635-a46910019250-kube-api-access-xrjrx\") pod \"de549276-34a9-48bd-8635-a46910019250\" (UID: \"de549276-34a9-48bd-8635-a46910019250\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.292036 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17bccce7-01c4-4456-a26b-c01374a263b5-operator-scripts\") pod \"17bccce7-01c4-4456-a26b-c01374a263b5\" (UID: \"17bccce7-01c4-4456-a26b-c01374a263b5\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.292073 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820-operator-scripts\") pod \"ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820\" (UID: \"ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.292105 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e61f0916-b247-44ba-bf5c-fcd4e00ecd88-operator-scripts\") pod \"e61f0916-b247-44ba-bf5c-fcd4e00ecd88\" (UID: \"e61f0916-b247-44ba-bf5c-fcd4e00ecd88\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.292124 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dgjj\" (UniqueName: \"kubernetes.io/projected/fb5a612d-f335-4dfc-912b-30247387c806-kube-api-access-2dgjj\") pod \"fb5a612d-f335-4dfc-912b-30247387c806\" (UID: \"fb5a612d-f335-4dfc-912b-30247387c806\") " Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.292305 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-run" (OuterVolumeSpecName: "var-run") pod "7c8a1717-6a29-4da9-adc7-66191893de25" (UID: "7c8a1717-6a29-4da9-adc7-66191893de25"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.293389 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17bccce7-01c4-4456-a26b-c01374a263b5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17bccce7-01c4-4456-a26b-c01374a263b5" (UID: "17bccce7-01c4-4456-a26b-c01374a263b5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.293593 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820" (UID: "ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.294354 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e61f0916-b247-44ba-bf5c-fcd4e00ecd88-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e61f0916-b247-44ba-bf5c-fcd4e00ecd88" (UID: "e61f0916-b247-44ba-bf5c-fcd4e00ecd88"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.299443 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ada9f489-ce1d-4251-a6e3-cfa7f322d9f0-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.299470 4746 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/7c8a1717-6a29-4da9-adc7-66191893de25-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.299482 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rzrq\" (UniqueName: \"kubernetes.io/projected/ada9f489-ce1d-4251-a6e3-cfa7f322d9f0-kube-api-access-5rzrq\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.299495 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/de549276-34a9-48bd-8635-a46910019250-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.299512 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17bccce7-01c4-4456-a26b-c01374a263b5-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.299522 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.299531 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e61f0916-b247-44ba-bf5c-fcd4e00ecd88-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc 
kubenswrapper[4746]: I1211 10:11:36.299545 4746 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-run\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.299554 4746 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c8a1717-6a29-4da9-adc7-66191893de25-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.299566 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb5a612d-f335-4dfc-912b-30247387c806-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.299579 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4f6x\" (UniqueName: \"kubernetes.io/projected/e61f0916-b247-44ba-bf5c-fcd4e00ecd88-kube-api-access-j4f6x\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.299590 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqqwh\" (UniqueName: \"kubernetes.io/projected/17bccce7-01c4-4456-a26b-c01374a263b5-kube-api-access-vqqwh\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.299602 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7c8a1717-6a29-4da9-adc7-66191893de25-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.301868 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de549276-34a9-48bd-8635-a46910019250-kube-api-access-xrjrx" (OuterVolumeSpecName: "kube-api-access-xrjrx") pod "de549276-34a9-48bd-8635-a46910019250" (UID: "de549276-34a9-48bd-8635-a46910019250"). InnerVolumeSpecName "kube-api-access-xrjrx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.302121 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c8a1717-6a29-4da9-adc7-66191893de25-kube-api-access-dqff6" (OuterVolumeSpecName: "kube-api-access-dqff6") pod "7c8a1717-6a29-4da9-adc7-66191893de25" (UID: "7c8a1717-6a29-4da9-adc7-66191893de25"). InnerVolumeSpecName "kube-api-access-dqff6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.302333 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820-kube-api-access-s927m" (OuterVolumeSpecName: "kube-api-access-s927m") pod "ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820" (UID: "ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820"). InnerVolumeSpecName "kube-api-access-s927m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.306737 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb5a612d-f335-4dfc-912b-30247387c806-kube-api-access-2dgjj" (OuterVolumeSpecName: "kube-api-access-2dgjj") pod "fb5a612d-f335-4dfc-912b-30247387c806" (UID: "fb5a612d-f335-4dfc-912b-30247387c806"). InnerVolumeSpecName "kube-api-access-2dgjj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.401982 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s927m\" (UniqueName: \"kubernetes.io/projected/ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820-kube-api-access-s927m\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.402058 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrjrx\" (UniqueName: \"kubernetes.io/projected/de549276-34a9-48bd-8635-a46910019250-kube-api-access-xrjrx\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.402080 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dgjj\" (UniqueName: \"kubernetes.io/projected/fb5a612d-f335-4dfc-912b-30247387c806-kube-api-access-2dgjj\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.402096 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqff6\" (UniqueName: \"kubernetes.io/projected/7c8a1717-6a29-4da9-adc7-66191893de25-kube-api-access-dqff6\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.749805 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2bptw" event={"ID":"480c95bc-8a38-4304-af6c-3118a7571459","Type":"ContainerStarted","Data":"08f26fd601ce04a319b4451e269e57884f7cb5097c43e042458fa8cd28fb3917"} Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.752074 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-9a83-account-create-update-5s7gx" event={"ID":"ada9f489-ce1d-4251-a6e3-cfa7f322d9f0","Type":"ContainerDied","Data":"17a07f7ac08b6088a4c042e06d5ec59ddf1fa4312571ceb3ff9360a9c5d49f51"} Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.752165 4746 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="17a07f7ac08b6088a4c042e06d5ec59ddf1fa4312571ceb3ff9360a9c5d49f51" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.752393 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-9a83-account-create-update-5s7gx" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.759883 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-242vs-config-dwbxl" event={"ID":"7c8a1717-6a29-4da9-adc7-66191893de25","Type":"ContainerDied","Data":"a7ac6ef6d5eb1af2482e1e224575a6e4996747e331d6382d207d7e09bffa81c6"} Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.759914 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7ac6ef6d5eb1af2482e1e224575a6e4996747e331d6382d207d7e09bffa81c6" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.759944 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-242vs-config-dwbxl" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.770815 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-crmlt" event={"ID":"fb5a612d-f335-4dfc-912b-30247387c806","Type":"ContainerDied","Data":"c17d8062242545898d82fb9c4035b070f89f1b9cac532e65c34523bcabc4005a"} Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.770884 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c17d8062242545898d82fb9c4035b070f89f1b9cac532e65c34523bcabc4005a" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.770865 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-crmlt" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.791745 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7","Type":"ContainerStarted","Data":"76e893ed168853630e2a37392cf5e5c6dd90e13591134e07d0048d6811f67c08"} Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.797497 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2b5xl" event={"ID":"17bccce7-01c4-4456-a26b-c01374a263b5","Type":"ContainerDied","Data":"4a1af834614e402b1a00acc2af1a28e3cba0dc6da19c229e84382644d78f8876"} Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.797936 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a1af834614e402b1a00acc2af1a28e3cba0dc6da19c229e84382644d78f8876" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.798132 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2b5xl" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.809565 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-2bptw" podStartSLOduration=6.864042354 podStartE2EDuration="12.8095434s" podCreationTimestamp="2025-12-11 10:11:24 +0000 UTC" firstStartedPulling="2025-12-11 10:11:29.975219901 +0000 UTC m=+1062.835083214" lastFinishedPulling="2025-12-11 10:11:35.920720947 +0000 UTC m=+1068.780584260" observedRunningTime="2025-12-11 10:11:36.800401434 +0000 UTC m=+1069.660264757" watchObservedRunningTime="2025-12-11 10:11:36.8095434 +0000 UTC m=+1069.669406713" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.809937 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4451-account-create-update-tld6b" event={"ID":"de549276-34a9-48bd-8635-a46910019250","Type":"ContainerDied","Data":"0d60fb364ec41a8e7285f03dacbdf41b2ba82b95fd3a773ad64b169f2d8578fd"} Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.810000 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d60fb364ec41a8e7285f03dacbdf41b2ba82b95fd3a773ad64b169f2d8578fd" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.810164 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4451-account-create-update-tld6b" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.818869 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-3829-account-create-update-gkgnn" event={"ID":"ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820","Type":"ContainerDied","Data":"db8d9dc8e2b30473c0d76de98d8ebb35bccaa0f5ee00cc54af59c6ba12faee2b"} Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.819582 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db8d9dc8e2b30473c0d76de98d8ebb35bccaa0f5ee00cc54af59c6ba12faee2b" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.818893 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-3829-account-create-update-gkgnn" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.823814 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-c6862" event={"ID":"e61f0916-b247-44ba-bf5c-fcd4e00ecd88","Type":"ContainerDied","Data":"3514218f926b1abd2c6b330bc6b6b4e7483652cf55a914ab92d7f56c1800c1c9"} Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.823867 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3514218f926b1abd2c6b330bc6b6b4e7483652cf55a914ab92d7f56c1800c1c9" Dec 11 10:11:36 crc kubenswrapper[4746]: I1211 10:11:36.823972 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-c6862" Dec 11 10:11:37 crc kubenswrapper[4746]: I1211 10:11:37.254745 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-242vs-config-dwbxl"] Dec 11 10:11:37 crc kubenswrapper[4746]: I1211 10:11:37.261330 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-242vs-config-dwbxl"] Dec 11 10:11:37 crc kubenswrapper[4746]: I1211 10:11:37.646580 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c8a1717-6a29-4da9-adc7-66191893de25" path="/var/lib/kubelet/pods/7c8a1717-6a29-4da9-adc7-66191893de25/volumes" Dec 11 10:11:37 crc kubenswrapper[4746]: I1211 10:11:37.842240 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7","Type":"ContainerStarted","Data":"49680374e7971b032e64aa17b1b6052735cba28ed9b2154cf1f079a1dccbd610"} Dec 11 10:11:37 crc kubenswrapper[4746]: I1211 10:11:37.842314 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7","Type":"ContainerStarted","Data":"3c385ac1d01524956c6bae354c57f62fb2505bdfb88a1d8ced74f69a20801e2d"} Dec 11 10:11:39 crc kubenswrapper[4746]: I1211 10:11:39.888443 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7","Type":"ContainerStarted","Data":"e046a233505e533ef60a223846c4b1bfe2aa3a91aa10ba46f34951c3321885f4"} Dec 11 10:11:41 crc kubenswrapper[4746]: I1211 10:11:41.948442 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7","Type":"ContainerStarted","Data":"82cbd50466383467acf88c69074e6d8bb1867b32a4cc5e0d4d14361592db34ae"} Dec 11 10:11:41 crc kubenswrapper[4746]: I1211 10:11:41.949382 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7","Type":"ContainerStarted","Data":"72bff0b3e9e50b1447f97c634a857a48460fb0069193aa8d62cded99fd7f850d"} Dec 11 10:11:41 crc kubenswrapper[4746]: I1211 10:11:41.949405 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7","Type":"ContainerStarted","Data":"5bc8b3ba3398e4cebaf32aeb3759a76cebf1db841f514617117b8f3fcd8fb39b"} Dec 11 10:11:42 crc kubenswrapper[4746]: I1211 10:11:42.971619 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7","Type":"ContainerStarted","Data":"3a66080e1de2a5bc69f6079d0448b8652c1b37130b795ec0e3edbf6ecd490ecb"} Dec 11 10:11:42 crc kubenswrapper[4746]: I1211 10:11:42.972161 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7","Type":"ContainerStarted","Data":"179b15fd1f7b312e6a0a0aeb628f028c945f933f785c3364a608c65fb53a3682"} Dec 11 10:11:42 crc kubenswrapper[4746]: I1211 10:11:42.972173 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7","Type":"ContainerStarted","Data":"048d301b2fdf343dda0af306ed5bedb20f138f4507eab5c1cb3f018ec0a1ec2c"} Dec 11 10:11:42 crc kubenswrapper[4746]: I1211 10:11:42.972185 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"3df27f8b-76bd-441d-9c3a-2b8bd1f250c7","Type":"ContainerStarted","Data":"46ceed3df381a657868387d3cc7e2a49eaa37bd80da7810d0d21234851ae6496"} Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.015589 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=43.097847741 podStartE2EDuration="54.015565679s" podCreationTimestamp="2025-12-11 10:10:49 +0000 UTC" 
firstStartedPulling="2025-12-11 10:11:30.114869855 +0000 UTC m=+1062.974733168" lastFinishedPulling="2025-12-11 10:11:41.032587793 +0000 UTC m=+1073.892451106" observedRunningTime="2025-12-11 10:11:43.013519984 +0000 UTC m=+1075.873383307" watchObservedRunningTime="2025-12-11 10:11:43.015565679 +0000 UTC m=+1075.875428992" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.375332 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-j22rn"] Dec 11 10:11:43 crc kubenswrapper[4746]: E1211 10:11:43.376225 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820" containerName="mariadb-account-create-update" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.376245 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820" containerName="mariadb-account-create-update" Dec 11 10:11:43 crc kubenswrapper[4746]: E1211 10:11:43.376263 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ada9f489-ce1d-4251-a6e3-cfa7f322d9f0" containerName="mariadb-account-create-update" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.376272 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ada9f489-ce1d-4251-a6e3-cfa7f322d9f0" containerName="mariadb-account-create-update" Dec 11 10:11:43 crc kubenswrapper[4746]: E1211 10:11:43.376290 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de549276-34a9-48bd-8635-a46910019250" containerName="mariadb-account-create-update" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.376300 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="de549276-34a9-48bd-8635-a46910019250" containerName="mariadb-account-create-update" Dec 11 10:11:43 crc kubenswrapper[4746]: E1211 10:11:43.376318 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb5a612d-f335-4dfc-912b-30247387c806" containerName="mariadb-database-create" Dec 11 10:11:43 crc 
kubenswrapper[4746]: I1211 10:11:43.376331 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb5a612d-f335-4dfc-912b-30247387c806" containerName="mariadb-database-create" Dec 11 10:11:43 crc kubenswrapper[4746]: E1211 10:11:43.376350 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e61f0916-b247-44ba-bf5c-fcd4e00ecd88" containerName="mariadb-database-create" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.376359 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e61f0916-b247-44ba-bf5c-fcd4e00ecd88" containerName="mariadb-database-create" Dec 11 10:11:43 crc kubenswrapper[4746]: E1211 10:11:43.376377 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c8a1717-6a29-4da9-adc7-66191893de25" containerName="ovn-config" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.376387 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c8a1717-6a29-4da9-adc7-66191893de25" containerName="ovn-config" Dec 11 10:11:43 crc kubenswrapper[4746]: E1211 10:11:43.376398 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17bccce7-01c4-4456-a26b-c01374a263b5" containerName="mariadb-database-create" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.376405 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="17bccce7-01c4-4456-a26b-c01374a263b5" containerName="mariadb-database-create" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.376765 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820" containerName="mariadb-account-create-update" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.376794 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="de549276-34a9-48bd-8635-a46910019250" containerName="mariadb-account-create-update" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.376809 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c8a1717-6a29-4da9-adc7-66191893de25" 
containerName="ovn-config" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.376825 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="17bccce7-01c4-4456-a26b-c01374a263b5" containerName="mariadb-database-create" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.376841 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ada9f489-ce1d-4251-a6e3-cfa7f322d9f0" containerName="mariadb-account-create-update" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.376853 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb5a612d-f335-4dfc-912b-30247387c806" containerName="mariadb-database-create" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.376866 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e61f0916-b247-44ba-bf5c-fcd4e00ecd88" containerName="mariadb-database-create" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.378188 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.381815 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.423069 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-j22rn"] Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.439224 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.439316 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.439345 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.439459 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.439519 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-config\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.439558 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq6qk\" (UniqueName: \"kubernetes.io/projected/80a3b4ee-fc40-43c7-8001-a7f02844b05d-kube-api-access-pq6qk\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.541582 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.541701 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.541724 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.541765 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.541876 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-config\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.541910 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq6qk\" (UniqueName: 
\"kubernetes.io/projected/80a3b4ee-fc40-43c7-8001-a7f02844b05d-kube-api-access-pq6qk\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.542860 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.542982 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.543678 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.543821 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-config\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.546488 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-ovsdbserver-sb\") pod 
\"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.582705 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq6qk\" (UniqueName: \"kubernetes.io/projected/80a3b4ee-fc40-43c7-8001-a7f02844b05d-kube-api-access-pq6qk\") pod \"dnsmasq-dns-5c79d794d7-j22rn\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.719637 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.989133 4746 generic.go:334] "Generic (PLEG): container finished" podID="480c95bc-8a38-4304-af6c-3118a7571459" containerID="08f26fd601ce04a319b4451e269e57884f7cb5097c43e042458fa8cd28fb3917" exitCode=0 Dec 11 10:11:43 crc kubenswrapper[4746]: I1211 10:11:43.989864 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2bptw" event={"ID":"480c95bc-8a38-4304-af6c-3118a7571459","Type":"ContainerDied","Data":"08f26fd601ce04a319b4451e269e57884f7cb5097c43e042458fa8cd28fb3917"} Dec 11 10:11:44 crc kubenswrapper[4746]: I1211 10:11:44.285235 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-j22rn"] Dec 11 10:11:45 crc kubenswrapper[4746]: I1211 10:11:45.016927 4746 generic.go:334] "Generic (PLEG): container finished" podID="80a3b4ee-fc40-43c7-8001-a7f02844b05d" containerID="d0b32a87328f5da062edccefa0e8ef6096ee7cec2302b3f67c7a117312554818" exitCode=0 Dec 11 10:11:45 crc kubenswrapper[4746]: I1211 10:11:45.017011 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" 
event={"ID":"80a3b4ee-fc40-43c7-8001-a7f02844b05d","Type":"ContainerDied","Data":"d0b32a87328f5da062edccefa0e8ef6096ee7cec2302b3f67c7a117312554818"} Dec 11 10:11:45 crc kubenswrapper[4746]: I1211 10:11:45.017304 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" event={"ID":"80a3b4ee-fc40-43c7-8001-a7f02844b05d","Type":"ContainerStarted","Data":"3c58906177ce7fe2cf5f27be45420fad7a906b4d0e7b18b7a8f76f2b825543a6"} Dec 11 10:11:45 crc kubenswrapper[4746]: I1211 10:11:45.019657 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fh8cg" event={"ID":"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983","Type":"ContainerStarted","Data":"60ae15292efbbacd5588fccd317e73efd7b4778c93ec9d786e7ee9d557a4f772"} Dec 11 10:11:45 crc kubenswrapper[4746]: I1211 10:11:45.080858 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-fh8cg" podStartSLOduration=3.117135815 podStartE2EDuration="33.080834827s" podCreationTimestamp="2025-12-11 10:11:12 +0000 UTC" firstStartedPulling="2025-12-11 10:11:13.258221605 +0000 UTC m=+1046.118084918" lastFinishedPulling="2025-12-11 10:11:43.221920617 +0000 UTC m=+1076.081783930" observedRunningTime="2025-12-11 10:11:45.075554866 +0000 UTC m=+1077.935418189" watchObservedRunningTime="2025-12-11 10:11:45.080834827 +0000 UTC m=+1077.940698130" Dec 11 10:11:45 crc kubenswrapper[4746]: I1211 10:11:45.392789 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2bptw" Dec 11 10:11:45 crc kubenswrapper[4746]: I1211 10:11:45.409714 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480c95bc-8a38-4304-af6c-3118a7571459-combined-ca-bundle\") pod \"480c95bc-8a38-4304-af6c-3118a7571459\" (UID: \"480c95bc-8a38-4304-af6c-3118a7571459\") " Dec 11 10:11:45 crc kubenswrapper[4746]: I1211 10:11:45.410121 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480c95bc-8a38-4304-af6c-3118a7571459-config-data\") pod \"480c95bc-8a38-4304-af6c-3118a7571459\" (UID: \"480c95bc-8a38-4304-af6c-3118a7571459\") " Dec 11 10:11:45 crc kubenswrapper[4746]: I1211 10:11:45.410181 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22rnh\" (UniqueName: \"kubernetes.io/projected/480c95bc-8a38-4304-af6c-3118a7571459-kube-api-access-22rnh\") pod \"480c95bc-8a38-4304-af6c-3118a7571459\" (UID: \"480c95bc-8a38-4304-af6c-3118a7571459\") " Dec 11 10:11:45 crc kubenswrapper[4746]: I1211 10:11:45.415916 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/480c95bc-8a38-4304-af6c-3118a7571459-kube-api-access-22rnh" (OuterVolumeSpecName: "kube-api-access-22rnh") pod "480c95bc-8a38-4304-af6c-3118a7571459" (UID: "480c95bc-8a38-4304-af6c-3118a7571459"). InnerVolumeSpecName "kube-api-access-22rnh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:45 crc kubenswrapper[4746]: I1211 10:11:45.459420 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/480c95bc-8a38-4304-af6c-3118a7571459-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "480c95bc-8a38-4304-af6c-3118a7571459" (UID: "480c95bc-8a38-4304-af6c-3118a7571459"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:45 crc kubenswrapper[4746]: I1211 10:11:45.486648 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/480c95bc-8a38-4304-af6c-3118a7571459-config-data" (OuterVolumeSpecName: "config-data") pod "480c95bc-8a38-4304-af6c-3118a7571459" (UID: "480c95bc-8a38-4304-af6c-3118a7571459"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:11:45 crc kubenswrapper[4746]: I1211 10:11:45.512875 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/480c95bc-8a38-4304-af6c-3118a7571459-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:45 crc kubenswrapper[4746]: I1211 10:11:45.512916 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22rnh\" (UniqueName: \"kubernetes.io/projected/480c95bc-8a38-4304-af6c-3118a7571459-kube-api-access-22rnh\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:45 crc kubenswrapper[4746]: I1211 10:11:45.512931 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/480c95bc-8a38-4304-af6c-3118a7571459-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.030537 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-2bptw" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.030755 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-2bptw" event={"ID":"480c95bc-8a38-4304-af6c-3118a7571459","Type":"ContainerDied","Data":"634b033db7f2c86be414fc1aba1723d708b3b7e2548b8b08ceea90f33f474e4c"} Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.030854 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="634b033db7f2c86be414fc1aba1723d708b3b7e2548b8b08ceea90f33f474e4c" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.034517 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" event={"ID":"80a3b4ee-fc40-43c7-8001-a7f02844b05d","Type":"ContainerStarted","Data":"e816bda9c38c83b51e7486a8309941ed0bf3f0d9a9ea0bf1bcd4704441d30c95"} Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.036139 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.078644 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" podStartSLOduration=3.07860489 podStartE2EDuration="3.07860489s" podCreationTimestamp="2025-12-11 10:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:11:46.059953339 +0000 UTC m=+1078.919816652" watchObservedRunningTime="2025-12-11 10:11:46.07860489 +0000 UTC m=+1078.938468203" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.301123 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-j22rn"] Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.328728 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b868669f-xjvzh"] Dec 11 10:11:46 crc 
kubenswrapper[4746]: E1211 10:11:46.329149 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="480c95bc-8a38-4304-af6c-3118a7571459" containerName="keystone-db-sync" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.329164 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="480c95bc-8a38-4304-af6c-3118a7571459" containerName="keystone-db-sync" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.329459 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="480c95bc-8a38-4304-af6c-3118a7571459" containerName="keystone-db-sync" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.332114 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.356883 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-xjvzh"] Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.412581 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lmt65"] Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.414777 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.419110 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.419430 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-slbzm" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.420179 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.420353 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.420620 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.421613 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lmt65"] Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.439331 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.439389 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc6lz\" (UniqueName: \"kubernetes.io/projected/d8b8f7f8-d758-4835-92d2-f39a243788bf-kube-api-access-tc6lz\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.439421 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.439457 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.439500 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-dns-svc\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.439552 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-config\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.542386 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-credential-keys\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.542697 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-dns-svc\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.542780 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-combined-ca-bundle\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.542886 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-scripts\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.542963 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-config-data\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.543084 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-config\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.543163 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-fernet-keys\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.543261 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.543356 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tkjj\" (UniqueName: \"kubernetes.io/projected/f121ad60-591f-4eb9-8ef0-b6147685dbd5-kube-api-access-5tkjj\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.543431 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc6lz\" (UniqueName: \"kubernetes.io/projected/d8b8f7f8-d758-4835-92d2-f39a243788bf-kube-api-access-tc6lz\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.543503 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-ovsdbserver-nb\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.543604 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.544268 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-dns-svc\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.544544 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-dns-swift-storage-0\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.545272 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-config\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.545585 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-ovsdbserver-sb\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.546348 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-ovsdbserver-nb\") 
pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.569394 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-856d48c4bf-f4txn"] Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.573039 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.575890 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-cj66l" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.576205 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc6lz\" (UniqueName: \"kubernetes.io/projected/d8b8f7f8-d758-4835-92d2-f39a243788bf-kube-api-access-tc6lz\") pod \"dnsmasq-dns-5b868669f-xjvzh\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.577851 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.578210 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.583665 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.590179 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-856d48c4bf-f4txn"] Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.647580 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b817e797-6fd0-4b06-94a5-32ade5baf2f5-config-data\") pod \"horizon-856d48c4bf-f4txn\" (UID: 
\"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.647737 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-credential-keys\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.647894 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-combined-ca-bundle\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.647947 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-scripts\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.647989 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-config-data\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.648015 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b817e797-6fd0-4b06-94a5-32ade5baf2f5-logs\") pod \"horizon-856d48c4bf-f4txn\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc 
kubenswrapper[4746]: I1211 10:11:46.648037 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st48x\" (UniqueName: \"kubernetes.io/projected/b817e797-6fd0-4b06-94a5-32ade5baf2f5-kube-api-access-st48x\") pod \"horizon-856d48c4bf-f4txn\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.648075 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b817e797-6fd0-4b06-94a5-32ade5baf2f5-scripts\") pod \"horizon-856d48c4bf-f4txn\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.648101 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b817e797-6fd0-4b06-94a5-32ade5baf2f5-horizon-secret-key\") pod \"horizon-856d48c4bf-f4txn\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.648152 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-fernet-keys\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.648219 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tkjj\" (UniqueName: \"kubernetes.io/projected/f121ad60-591f-4eb9-8ef0-b6147685dbd5-kube-api-access-5tkjj\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 
10:11:46.652498 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-credential-keys\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.655938 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-combined-ca-bundle\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.658895 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-config-data\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.659190 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-4n4rh"] Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.660605 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.663773 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9lnrp" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.664147 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.664196 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-scripts\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.664339 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.676543 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.680673 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4n4rh"] Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.686248 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-fernet-keys\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.687920 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tkjj\" (UniqueName: \"kubernetes.io/projected/f121ad60-591f-4eb9-8ef0-b6147685dbd5-kube-api-access-5tkjj\") pod \"keystone-bootstrap-lmt65\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.747781 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.769871 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-db-sync-config-data\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.769952 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-config-data\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.769991 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b817e797-6fd0-4b06-94a5-32ade5baf2f5-config-data\") pod \"horizon-856d48c4bf-f4txn\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.770014 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-scripts\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.770098 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-combined-ca-bundle\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 
11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.770116 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b817e797-6fd0-4b06-94a5-32ade5baf2f5-logs\") pod \"horizon-856d48c4bf-f4txn\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.770132 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4080f404-aff6-42e2-856c-5b347b908963-etc-machine-id\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.770165 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-st48x\" (UniqueName: \"kubernetes.io/projected/b817e797-6fd0-4b06-94a5-32ade5baf2f5-kube-api-access-st48x\") pod \"horizon-856d48c4bf-f4txn\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.770188 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b817e797-6fd0-4b06-94a5-32ade5baf2f5-scripts\") pod \"horizon-856d48c4bf-f4txn\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.770213 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b817e797-6fd0-4b06-94a5-32ade5baf2f5-horizon-secret-key\") pod \"horizon-856d48c4bf-f4txn\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.770247 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpxqw\" (UniqueName: \"kubernetes.io/projected/4080f404-aff6-42e2-856c-5b347b908963-kube-api-access-fpxqw\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.771445 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b817e797-6fd0-4b06-94a5-32ade5baf2f5-config-data\") pod \"horizon-856d48c4bf-f4txn\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.771678 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b817e797-6fd0-4b06-94a5-32ade5baf2f5-logs\") pod \"horizon-856d48c4bf-f4txn\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.783778 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b817e797-6fd0-4b06-94a5-32ade5baf2f5-scripts\") pod \"horizon-856d48c4bf-f4txn\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.793873 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b817e797-6fd0-4b06-94a5-32ade5baf2f5-horizon-secret-key\") pod \"horizon-856d48c4bf-f4txn\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.799068 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.807026 4746 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.815284 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.824284 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.853937 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.854642 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-st48x\" (UniqueName: \"kubernetes.io/projected/b817e797-6fd0-4b06-94a5-32ade5baf2f5-kube-api-access-st48x\") pod \"horizon-856d48c4bf-f4txn\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.871994 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-scripts\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.872058 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-db-sync-config-data\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.872078 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: 
\"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.872113 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-config-data\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.872137 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-config-data\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.872174 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-scripts\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.872221 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5ck4\" (UniqueName: \"kubernetes.io/projected/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-kube-api-access-x5ck4\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.872238 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-combined-ca-bundle\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.872255 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.872272 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4080f404-aff6-42e2-856c-5b347b908963-etc-machine-id\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.872303 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-log-httpd\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.872323 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpxqw\" (UniqueName: \"kubernetes.io/projected/4080f404-aff6-42e2-856c-5b347b908963-kube-api-access-fpxqw\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.872341 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-run-httpd\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.873927 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/4080f404-aff6-42e2-856c-5b347b908963-etc-machine-id\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.877855 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-config-data\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.880297 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-combined-ca-bundle\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.881921 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-db-sync-config-data\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.883212 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-699dbbd6bf-zhtzw"] Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.888799 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.900494 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-699dbbd6bf-zhtzw"] Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.903063 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpxqw\" (UniqueName: \"kubernetes.io/projected/4080f404-aff6-42e2-856c-5b347b908963-kube-api-access-fpxqw\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.903866 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-scripts\") pod \"cinder-db-sync-4n4rh\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.917821 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-j82k7"] Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.919725 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-j82k7" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.925205 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.925642 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nstwz" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.925897 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.963414 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.971119 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-j82k7"] Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.975350 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-config-data\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.975454 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/857a94b0-85c3-458e-b0f9-560e834bedda-config-data\") pod \"horizon-699dbbd6bf-zhtzw\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.975499 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5ck4\" (UniqueName: \"kubernetes.io/projected/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-kube-api-access-x5ck4\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.975520 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.975539 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbkng\" (UniqueName: \"kubernetes.io/projected/857a94b0-85c3-458e-b0f9-560e834bedda-kube-api-access-hbkng\") pod 
\"horizon-699dbbd6bf-zhtzw\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.975562 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/857a94b0-85c3-458e-b0f9-560e834bedda-scripts\") pod \"horizon-699dbbd6bf-zhtzw\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.975585 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/857a94b0-85c3-458e-b0f9-560e834bedda-logs\") pod \"horizon-699dbbd6bf-zhtzw\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.975610 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-log-httpd\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.975630 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/857a94b0-85c3-458e-b0f9-560e834bedda-horizon-secret-key\") pod \"horizon-699dbbd6bf-zhtzw\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.975656 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-combined-ca-bundle\") pod \"neutron-db-sync-j82k7\" (UID: \"4daad880-1f8c-4f37-b718-b8b9eb88d0f3\") " 
pod="openstack/neutron-db-sync-j82k7" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.975677 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-run-httpd\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.975721 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-scripts\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.975739 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5rg8\" (UniqueName: \"kubernetes.io/projected/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-kube-api-access-k5rg8\") pod \"neutron-db-sync-j82k7\" (UID: \"4daad880-1f8c-4f37-b718-b8b9eb88d0f3\") " pod="openstack/neutron-db-sync-j82k7" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.975758 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-config\") pod \"neutron-db-sync-j82k7\" (UID: \"4daad880-1f8c-4f37-b718-b8b9eb88d0f3\") " pod="openstack/neutron-db-sync-j82k7" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.975775 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.977337 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-run-httpd\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.977670 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-log-httpd\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.980005 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-m9x6s"] Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.981691 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.983103 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.983349 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.986783 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-config-data\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.986929 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 11 10:11:46 crc kubenswrapper[4746]: I1211 10:11:46.987132 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-pjvww" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.000135 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-m9x6s"] Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.037378 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-scripts\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.038155 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.053318 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-xjvzh"] Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.071377 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5ck4\" (UniqueName: \"kubernetes.io/projected/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-kube-api-access-x5ck4\") pod \"ceilometer-0\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " pod="openstack/ceilometer-0" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.078673 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/857a94b0-85c3-458e-b0f9-560e834bedda-horizon-secret-key\") pod \"horizon-699dbbd6bf-zhtzw\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.078728 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-combined-ca-bundle\") pod \"neutron-db-sync-j82k7\" (UID: \"4daad880-1f8c-4f37-b718-b8b9eb88d0f3\") " pod="openstack/neutron-db-sync-j82k7" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.078793 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9962918f-3f76-42ae-b292-0c2300106516-logs\") pod \"placement-db-sync-m9x6s\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.078817 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5rg8\" (UniqueName: \"kubernetes.io/projected/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-kube-api-access-k5rg8\") pod \"neutron-db-sync-j82k7\" (UID: \"4daad880-1f8c-4f37-b718-b8b9eb88d0f3\") " pod="openstack/neutron-db-sync-j82k7" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.078837 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-config\") pod \"neutron-db-sync-j82k7\" (UID: \"4daad880-1f8c-4f37-b718-b8b9eb88d0f3\") " pod="openstack/neutron-db-sync-j82k7" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.078874 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-combined-ca-bundle\") pod \"placement-db-sync-m9x6s\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.078896 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-scripts\") pod \"placement-db-sync-m9x6s\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.079007 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/857a94b0-85c3-458e-b0f9-560e834bedda-config-data\") pod \"horizon-699dbbd6bf-zhtzw\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.079076 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbkng\" (UniqueName: \"kubernetes.io/projected/857a94b0-85c3-458e-b0f9-560e834bedda-kube-api-access-hbkng\") pod \"horizon-699dbbd6bf-zhtzw\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.079111 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/857a94b0-85c3-458e-b0f9-560e834bedda-scripts\") pod 
\"horizon-699dbbd6bf-zhtzw\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.079138 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54gq6\" (UniqueName: \"kubernetes.io/projected/9962918f-3f76-42ae-b292-0c2300106516-kube-api-access-54gq6\") pod \"placement-db-sync-m9x6s\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.079160 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-config-data\") pod \"placement-db-sync-m9x6s\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.079185 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/857a94b0-85c3-458e-b0f9-560e834bedda-logs\") pod \"horizon-699dbbd6bf-zhtzw\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.080018 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/857a94b0-85c3-458e-b0f9-560e834bedda-logs\") pod \"horizon-699dbbd6bf-zhtzw\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.083183 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/857a94b0-85c3-458e-b0f9-560e834bedda-scripts\") pod \"horizon-699dbbd6bf-zhtzw\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 
11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.089543 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-combined-ca-bundle\") pod \"neutron-db-sync-j82k7\" (UID: \"4daad880-1f8c-4f37-b718-b8b9eb88d0f3\") " pod="openstack/neutron-db-sync-j82k7" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.102491 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/857a94b0-85c3-458e-b0f9-560e834bedda-horizon-secret-key\") pod \"horizon-699dbbd6bf-zhtzw\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.106350 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/857a94b0-85c3-458e-b0f9-560e834bedda-config-data\") pod \"horizon-699dbbd6bf-zhtzw\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.124963 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.124538 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5rg8\" (UniqueName: \"kubernetes.io/projected/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-kube-api-access-k5rg8\") pod \"neutron-db-sync-j82k7\" (UID: \"4daad880-1f8c-4f37-b718-b8b9eb88d0f3\") " pod="openstack/neutron-db-sync-j82k7" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.131072 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-lznjj"] Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.134514 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lznjj" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.135911 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-config\") pod \"neutron-db-sync-j82k7\" (UID: \"4daad880-1f8c-4f37-b718-b8b9eb88d0f3\") " pod="openstack/neutron-db-sync-j82k7" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.137843 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbkng\" (UniqueName: \"kubernetes.io/projected/857a94b0-85c3-458e-b0f9-560e834bedda-kube-api-access-hbkng\") pod \"horizon-699dbbd6bf-zhtzw\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.138436 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.138747 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-js5ns" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.141693 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.159359 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-24z6n"] Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.164011 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.170273 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-lznjj"] Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.189992 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-config\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.190135 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54gq6\" (UniqueName: \"kubernetes.io/projected/9962918f-3f76-42ae-b292-0c2300106516-kube-api-access-54gq6\") pod \"placement-db-sync-m9x6s\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.190213 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-config-data\") pod \"placement-db-sync-m9x6s\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.190331 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-db-sync-config-data\") pod \"barbican-db-sync-lznjj\" (UID: \"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc\") " pod="openstack/barbican-db-sync-lznjj" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.190463 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-combined-ca-bundle\") pod \"barbican-db-sync-lznjj\" (UID: \"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc\") " pod="openstack/barbican-db-sync-lznjj" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.190679 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.190719 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.190868 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.190971 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9962918f-3f76-42ae-b292-0c2300106516-logs\") pod \"placement-db-sync-m9x6s\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.191099 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-combined-ca-bundle\") pod \"placement-db-sync-m9x6s\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.191129 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-scripts\") pod \"placement-db-sync-m9x6s\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.191453 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btq5g\" (UniqueName: \"kubernetes.io/projected/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-kube-api-access-btq5g\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.191654 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf4hg\" (UniqueName: \"kubernetes.io/projected/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-kube-api-access-wf4hg\") pod \"barbican-db-sync-lznjj\" (UID: \"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc\") " pod="openstack/barbican-db-sync-lznjj" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.191736 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-dns-svc\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.194392 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/9962918f-3f76-42ae-b292-0c2300106516-logs\") pod \"placement-db-sync-m9x6s\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.198320 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-24z6n"] Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.200331 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-config-data\") pod \"placement-db-sync-m9x6s\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.203074 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-scripts\") pod \"placement-db-sync-m9x6s\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.204176 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-combined-ca-bundle\") pod \"placement-db-sync-m9x6s\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.222739 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.223430 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54gq6\" (UniqueName: \"kubernetes.io/projected/9962918f-3f76-42ae-b292-0c2300106516-kube-api-access-54gq6\") pod \"placement-db-sync-m9x6s\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.283199 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-j82k7" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.294236 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.294295 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.294321 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.294417 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btq5g\" (UniqueName: 
\"kubernetes.io/projected/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-kube-api-access-btq5g\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.294467 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf4hg\" (UniqueName: \"kubernetes.io/projected/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-kube-api-access-wf4hg\") pod \"barbican-db-sync-lznjj\" (UID: \"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc\") " pod="openstack/barbican-db-sync-lznjj" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.294498 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-dns-svc\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.294566 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-config\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.294608 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-db-sync-config-data\") pod \"barbican-db-sync-lznjj\" (UID: \"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc\") " pod="openstack/barbican-db-sync-lznjj" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.296026 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-combined-ca-bundle\") 
pod \"barbican-db-sync-lznjj\" (UID: \"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc\") " pod="openstack/barbican-db-sync-lznjj" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.297146 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-ovsdbserver-sb\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.297994 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-ovsdbserver-nb\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.298011 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-dns-swift-storage-0\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.298824 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-config\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.299139 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-dns-svc\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 
11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.301957 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-combined-ca-bundle\") pod \"barbican-db-sync-lznjj\" (UID: \"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc\") " pod="openstack/barbican-db-sync-lznjj" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.317685 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btq5g\" (UniqueName: \"kubernetes.io/projected/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-kube-api-access-btq5g\") pod \"dnsmasq-dns-cf78879c9-24z6n\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.320065 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf4hg\" (UniqueName: \"kubernetes.io/projected/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-kube-api-access-wf4hg\") pod \"barbican-db-sync-lznjj\" (UID: \"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc\") " pod="openstack/barbican-db-sync-lznjj" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.358697 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-db-sync-config-data\") pod \"barbican-db-sync-lznjj\" (UID: \"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc\") " pod="openstack/barbican-db-sync-lznjj" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.410920 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-m9x6s" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.474457 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lznjj" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.512201 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.558849 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-xjvzh"] Dec 11 10:11:47 crc kubenswrapper[4746]: I1211 10:11:47.911513 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-856d48c4bf-f4txn"] Dec 11 10:11:47 crc kubenswrapper[4746]: W1211 10:11:47.919731 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb817e797_6fd0_4b06_94a5_32ade5baf2f5.slice/crio-dc4538d3fe68f45d8ccd0468d202d70d6145f75ec5ba453223b102578f38f578 WatchSource:0}: Error finding container dc4538d3fe68f45d8ccd0468d202d70d6145f75ec5ba453223b102578f38f578: Status 404 returned error can't find the container with id dc4538d3fe68f45d8ccd0468d202d70d6145f75ec5ba453223b102578f38f578 Dec 11 10:11:48 crc kubenswrapper[4746]: I1211 10:11:48.121767 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lmt65"] Dec 11 10:11:48 crc kubenswrapper[4746]: I1211 10:11:48.148220 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 11 10:11:48 crc kubenswrapper[4746]: I1211 10:11:48.165861 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-856d48c4bf-f4txn" event={"ID":"b817e797-6fd0-4b06-94a5-32ade5baf2f5","Type":"ContainerStarted","Data":"dc4538d3fe68f45d8ccd0468d202d70d6145f75ec5ba453223b102578f38f578"} Dec 11 10:11:48 crc kubenswrapper[4746]: I1211 10:11:48.168156 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lmt65" event={"ID":"f121ad60-591f-4eb9-8ef0-b6147685dbd5","Type":"ContainerStarted","Data":"439c9ebff6aa5a13963960216bb9c5e3581a3bb4ba986f079fc43f312b5344fd"} Dec 11 10:11:48 crc kubenswrapper[4746]: I1211 10:11:48.172183 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5b868669f-xjvzh" event={"ID":"d8b8f7f8-d758-4835-92d2-f39a243788bf","Type":"ContainerStarted","Data":"13ac6162389ff86ad8fa3930c18646b6b9c8635c087979a0e1fd9eab692efb6d"} Dec 11 10:11:48 crc kubenswrapper[4746]: I1211 10:11:48.172374 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" podUID="80a3b4ee-fc40-43c7-8001-a7f02844b05d" containerName="dnsmasq-dns" containerID="cri-o://e816bda9c38c83b51e7486a8309941ed0bf3f0d9a9ea0bf1bcd4704441d30c95" gracePeriod=10 Dec 11 10:11:48 crc kubenswrapper[4746]: W1211 10:11:48.190980 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4080f404_aff6_42e2_856c_5b347b908963.slice/crio-e4e61b2f0935ac932f43b2f65b93da4b6bd9d875cc540f5662f2032f25d21f8f WatchSource:0}: Error finding container e4e61b2f0935ac932f43b2f65b93da4b6bd9d875cc540f5662f2032f25d21f8f: Status 404 returned error can't find the container with id e4e61b2f0935ac932f43b2f65b93da4b6bd9d875cc540f5662f2032f25d21f8f Dec 11 10:11:48 crc kubenswrapper[4746]: I1211 10:11:48.200117 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-4n4rh"] Dec 11 10:11:48 crc kubenswrapper[4746]: I1211 10:11:48.387302 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-j82k7"] Dec 11 10:11:48 crc kubenswrapper[4746]: I1211 10:11:48.405625 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:11:48 crc kubenswrapper[4746]: I1211 10:11:48.419846 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-699dbbd6bf-zhtzw"] Dec 11 10:11:48 crc kubenswrapper[4746]: I1211 10:11:48.608269 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-24z6n"] Dec 11 10:11:48 crc kubenswrapper[4746]: I1211 10:11:48.660097 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-db-sync-lznjj"] Dec 11 10:11:48 crc kubenswrapper[4746]: I1211 10:11:48.834557 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-m9x6s"] Dec 11 10:11:48 crc kubenswrapper[4746]: I1211 10:11:48.992197 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.182760 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq6qk\" (UniqueName: \"kubernetes.io/projected/80a3b4ee-fc40-43c7-8001-a7f02844b05d-kube-api-access-pq6qk\") pod \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.182838 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-ovsdbserver-sb\") pod \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.182926 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-ovsdbserver-nb\") pod \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.183011 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-dns-svc\") pod \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.183030 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-dns-swift-storage-0\") pod \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.183200 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-config\") pod \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\" (UID: \"80a3b4ee-fc40-43c7-8001-a7f02844b05d\") " Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.184759 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-699dbbd6bf-zhtzw" event={"ID":"857a94b0-85c3-458e-b0f9-560e834bedda","Type":"ContainerStarted","Data":"38bd66174b3faea1ae44ada2bb5f278461bc70c42116752d748d1c42495a131f"} Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.186237 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4n4rh" event={"ID":"4080f404-aff6-42e2-856c-5b347b908963","Type":"ContainerStarted","Data":"e4e61b2f0935ac932f43b2f65b93da4b6bd9d875cc540f5662f2032f25d21f8f"} Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.189772 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80a3b4ee-fc40-43c7-8001-a7f02844b05d-kube-api-access-pq6qk" (OuterVolumeSpecName: "kube-api-access-pq6qk") pod "80a3b4ee-fc40-43c7-8001-a7f02844b05d" (UID: "80a3b4ee-fc40-43c7-8001-a7f02844b05d"). InnerVolumeSpecName "kube-api-access-pq6qk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.190353 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lznjj" event={"ID":"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc","Type":"ContainerStarted","Data":"3ece94411bd8eb0d81dc00e67f76a045ee02399feb076e2dd43be5af7fd08a57"} Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.191850 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j82k7" event={"ID":"4daad880-1f8c-4f37-b718-b8b9eb88d0f3","Type":"ContainerStarted","Data":"d539efced0f07c5cf02199c4e927d7a28b81ada0b57c67dc11a5d0678f447880"} Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.191877 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j82k7" event={"ID":"4daad880-1f8c-4f37-b718-b8b9eb88d0f3","Type":"ContainerStarted","Data":"8fdc0ffa4e25ae912f13ea47eec9c51fea90c4fc6a938dd581274d2fba01a4d2"} Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.194948 4746 generic.go:334] "Generic (PLEG): container finished" podID="adc6dfbf-d8de-4c67-b56c-83127bd0dac5" containerID="d942758c91e00bfc84c6304419c2eabb4dd2225b452525de33f3b153b11fb549" exitCode=0 Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.195011 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-24z6n" event={"ID":"adc6dfbf-d8de-4c67-b56c-83127bd0dac5","Type":"ContainerDied","Data":"d942758c91e00bfc84c6304419c2eabb4dd2225b452525de33f3b153b11fb549"} Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.195033 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-24z6n" event={"ID":"adc6dfbf-d8de-4c67-b56c-83127bd0dac5","Type":"ContainerStarted","Data":"be8475c50f38a4b6d36ce2722d9da079e4d37a6da763274288a2d22ba2ac0446"} Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.203271 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.202687 4746 generic.go:334] "Generic (PLEG): container finished" podID="80a3b4ee-fc40-43c7-8001-a7f02844b05d" containerID="e816bda9c38c83b51e7486a8309941ed0bf3f0d9a9ea0bf1bcd4704441d30c95" exitCode=0 Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.203731 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" event={"ID":"80a3b4ee-fc40-43c7-8001-a7f02844b05d","Type":"ContainerDied","Data":"e816bda9c38c83b51e7486a8309941ed0bf3f0d9a9ea0bf1bcd4704441d30c95"} Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.203757 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-j22rn" event={"ID":"80a3b4ee-fc40-43c7-8001-a7f02844b05d","Type":"ContainerDied","Data":"3c58906177ce7fe2cf5f27be45420fad7a906b4d0e7b18b7a8f76f2b825543a6"} Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.203778 4746 scope.go:117] "RemoveContainer" containerID="e816bda9c38c83b51e7486a8309941ed0bf3f0d9a9ea0bf1bcd4704441d30c95" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.210544 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-j82k7" podStartSLOduration=3.210531332 podStartE2EDuration="3.210531332s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:11:49.209227877 +0000 UTC m=+1082.069091190" watchObservedRunningTime="2025-12-11 10:11:49.210531332 +0000 UTC m=+1082.070394645" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.232361 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lmt65" 
event={"ID":"f121ad60-591f-4eb9-8ef0-b6147685dbd5","Type":"ContainerStarted","Data":"b8e5c1fd39f2676af136ce6be36675b0dfe3dd5cfef885438408fa384e27032e"} Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.247175 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-config" (OuterVolumeSpecName: "config") pod "80a3b4ee-fc40-43c7-8001-a7f02844b05d" (UID: "80a3b4ee-fc40-43c7-8001-a7f02844b05d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.264281 4746 scope.go:117] "RemoveContainer" containerID="d0b32a87328f5da062edccefa0e8ef6096ee7cec2302b3f67c7a117312554818" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.264563 4746 generic.go:334] "Generic (PLEG): container finished" podID="d8b8f7f8-d758-4835-92d2-f39a243788bf" containerID="4f088117a6ed1597f98d4d93bc39c66fafa3ac57ee9fcd219330dc8dbf1d7f40" exitCode=0 Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.264593 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-xjvzh" event={"ID":"d8b8f7f8-d758-4835-92d2-f39a243788bf","Type":"ContainerDied","Data":"4f088117a6ed1597f98d4d93bc39c66fafa3ac57ee9fcd219330dc8dbf1d7f40"} Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.268285 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m9x6s" event={"ID":"9962918f-3f76-42ae-b292-0c2300106516","Type":"ContainerStarted","Data":"8de6d6b55b515ad7afc4ce59200f20fc3ccbed7b4a317de14e5ca14b2b0a5418"} Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.269637 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79","Type":"ContainerStarted","Data":"0ed3a4ffab091f56cf2e6f569aa2363e563d62efb15aa8f2f6d447b184887a94"} Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.275159 4746 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "80a3b4ee-fc40-43c7-8001-a7f02844b05d" (UID: "80a3b4ee-fc40-43c7-8001-a7f02844b05d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.280850 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "80a3b4ee-fc40-43c7-8001-a7f02844b05d" (UID: "80a3b4ee-fc40-43c7-8001-a7f02844b05d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.282375 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "80a3b4ee-fc40-43c7-8001-a7f02844b05d" (UID: "80a3b4ee-fc40-43c7-8001-a7f02844b05d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.286897 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.286925 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq6qk\" (UniqueName: \"kubernetes.io/projected/80a3b4ee-fc40-43c7-8001-a7f02844b05d-kube-api-access-pq6qk\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.286940 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.286952 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.286962 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.334095 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lmt65" podStartSLOduration=3.334073783 podStartE2EDuration="3.334073783s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:11:49.280608716 +0000 UTC m=+1082.140472029" watchObservedRunningTime="2025-12-11 10:11:49.334073783 +0000 UTC m=+1082.193937096" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 
10:11:49.334484 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "80a3b4ee-fc40-43c7-8001-a7f02844b05d" (UID: "80a3b4ee-fc40-43c7-8001-a7f02844b05d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.387878 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80a3b4ee-fc40-43c7-8001-a7f02844b05d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.433217 4746 scope.go:117] "RemoveContainer" containerID="e816bda9c38c83b51e7486a8309941ed0bf3f0d9a9ea0bf1bcd4704441d30c95" Dec 11 10:11:49 crc kubenswrapper[4746]: E1211 10:11:49.438228 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e816bda9c38c83b51e7486a8309941ed0bf3f0d9a9ea0bf1bcd4704441d30c95\": container with ID starting with e816bda9c38c83b51e7486a8309941ed0bf3f0d9a9ea0bf1bcd4704441d30c95 not found: ID does not exist" containerID="e816bda9c38c83b51e7486a8309941ed0bf3f0d9a9ea0bf1bcd4704441d30c95" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.438302 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e816bda9c38c83b51e7486a8309941ed0bf3f0d9a9ea0bf1bcd4704441d30c95"} err="failed to get container status \"e816bda9c38c83b51e7486a8309941ed0bf3f0d9a9ea0bf1bcd4704441d30c95\": rpc error: code = NotFound desc = could not find container \"e816bda9c38c83b51e7486a8309941ed0bf3f0d9a9ea0bf1bcd4704441d30c95\": container with ID starting with e816bda9c38c83b51e7486a8309941ed0bf3f0d9a9ea0bf1bcd4704441d30c95 not found: ID does not exist" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.438330 4746 
scope.go:117] "RemoveContainer" containerID="d0b32a87328f5da062edccefa0e8ef6096ee7cec2302b3f67c7a117312554818" Dec 11 10:11:49 crc kubenswrapper[4746]: E1211 10:11:49.442150 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0b32a87328f5da062edccefa0e8ef6096ee7cec2302b3f67c7a117312554818\": container with ID starting with d0b32a87328f5da062edccefa0e8ef6096ee7cec2302b3f67c7a117312554818 not found: ID does not exist" containerID="d0b32a87328f5da062edccefa0e8ef6096ee7cec2302b3f67c7a117312554818" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.442186 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0b32a87328f5da062edccefa0e8ef6096ee7cec2302b3f67c7a117312554818"} err="failed to get container status \"d0b32a87328f5da062edccefa0e8ef6096ee7cec2302b3f67c7a117312554818\": rpc error: code = NotFound desc = could not find container \"d0b32a87328f5da062edccefa0e8ef6096ee7cec2302b3f67c7a117312554818\": container with ID starting with d0b32a87328f5da062edccefa0e8ef6096ee7cec2302b3f67c7a117312554818 not found: ID does not exist" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.571209 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-j22rn"] Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.582058 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-j22rn"] Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.753770 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80a3b4ee-fc40-43c7-8001-a7f02844b05d" path="/var/lib/kubelet/pods/80a3b4ee-fc40-43c7-8001-a7f02844b05d/volumes" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.754749 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-699dbbd6bf-zhtzw"] Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.765249 4746 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/horizon-5794b6ff69-gcvdv"] Dec 11 10:11:49 crc kubenswrapper[4746]: E1211 10:11:49.765687 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a3b4ee-fc40-43c7-8001-a7f02844b05d" containerName="init" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.765704 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a3b4ee-fc40-43c7-8001-a7f02844b05d" containerName="init" Dec 11 10:11:49 crc kubenswrapper[4746]: E1211 10:11:49.765728 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80a3b4ee-fc40-43c7-8001-a7f02844b05d" containerName="dnsmasq-dns" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.765735 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="80a3b4ee-fc40-43c7-8001-a7f02844b05d" containerName="dnsmasq-dns" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.770626 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="80a3b4ee-fc40-43c7-8001-a7f02844b05d" containerName="dnsmasq-dns" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.771988 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.809538 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5794b6ff69-gcvdv"] Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.818882 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.900927 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56188568-7337-4fa9-bcc5-25e02aa0366a-logs\") pod \"horizon-5794b6ff69-gcvdv\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.901007 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b567r\" (UniqueName: \"kubernetes.io/projected/56188568-7337-4fa9-bcc5-25e02aa0366a-kube-api-access-b567r\") pod \"horizon-5794b6ff69-gcvdv\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.901074 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56188568-7337-4fa9-bcc5-25e02aa0366a-scripts\") pod \"horizon-5794b6ff69-gcvdv\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.901105 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/56188568-7337-4fa9-bcc5-25e02aa0366a-horizon-secret-key\") pod \"horizon-5794b6ff69-gcvdv\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:49 crc kubenswrapper[4746]: 
I1211 10:11:49.901144 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56188568-7337-4fa9-bcc5-25e02aa0366a-config-data\") pod \"horizon-5794b6ff69-gcvdv\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:49 crc kubenswrapper[4746]: I1211 10:11:49.926885 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.002765 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56188568-7337-4fa9-bcc5-25e02aa0366a-logs\") pod \"horizon-5794b6ff69-gcvdv\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.002834 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b567r\" (UniqueName: \"kubernetes.io/projected/56188568-7337-4fa9-bcc5-25e02aa0366a-kube-api-access-b567r\") pod \"horizon-5794b6ff69-gcvdv\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.002859 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56188568-7337-4fa9-bcc5-25e02aa0366a-scripts\") pod \"horizon-5794b6ff69-gcvdv\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.002883 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/56188568-7337-4fa9-bcc5-25e02aa0366a-horizon-secret-key\") pod \"horizon-5794b6ff69-gcvdv\" (UID: 
\"56188568-7337-4fa9-bcc5-25e02aa0366a\") " pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.002901 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56188568-7337-4fa9-bcc5-25e02aa0366a-config-data\") pod \"horizon-5794b6ff69-gcvdv\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.004213 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56188568-7337-4fa9-bcc5-25e02aa0366a-config-data\") pod \"horizon-5794b6ff69-gcvdv\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.004433 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56188568-7337-4fa9-bcc5-25e02aa0366a-logs\") pod \"horizon-5794b6ff69-gcvdv\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.005140 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56188568-7337-4fa9-bcc5-25e02aa0366a-scripts\") pod \"horizon-5794b6ff69-gcvdv\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.010744 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/56188568-7337-4fa9-bcc5-25e02aa0366a-horizon-secret-key\") pod \"horizon-5794b6ff69-gcvdv\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.025654 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b567r\" (UniqueName: \"kubernetes.io/projected/56188568-7337-4fa9-bcc5-25e02aa0366a-kube-api-access-b567r\") pod \"horizon-5794b6ff69-gcvdv\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.104153 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-dns-swift-storage-0\") pod \"d8b8f7f8-d758-4835-92d2-f39a243788bf\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.104193 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tc6lz\" (UniqueName: \"kubernetes.io/projected/d8b8f7f8-d758-4835-92d2-f39a243788bf-kube-api-access-tc6lz\") pod \"d8b8f7f8-d758-4835-92d2-f39a243788bf\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.104284 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-ovsdbserver-sb\") pod \"d8b8f7f8-d758-4835-92d2-f39a243788bf\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.104347 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-ovsdbserver-nb\") pod \"d8b8f7f8-d758-4835-92d2-f39a243788bf\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.104456 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-dns-svc\") pod \"d8b8f7f8-d758-4835-92d2-f39a243788bf\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.104504 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-config\") pod \"d8b8f7f8-d758-4835-92d2-f39a243788bf\" (UID: \"d8b8f7f8-d758-4835-92d2-f39a243788bf\") " Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.109142 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8b8f7f8-d758-4835-92d2-f39a243788bf-kube-api-access-tc6lz" (OuterVolumeSpecName: "kube-api-access-tc6lz") pod "d8b8f7f8-d758-4835-92d2-f39a243788bf" (UID: "d8b8f7f8-d758-4835-92d2-f39a243788bf"). InnerVolumeSpecName "kube-api-access-tc6lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.133966 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d8b8f7f8-d758-4835-92d2-f39a243788bf" (UID: "d8b8f7f8-d758-4835-92d2-f39a243788bf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.135118 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d8b8f7f8-d758-4835-92d2-f39a243788bf" (UID: "d8b8f7f8-d758-4835-92d2-f39a243788bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.137871 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.143339 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d8b8f7f8-d758-4835-92d2-f39a243788bf" (UID: "d8b8f7f8-d758-4835-92d2-f39a243788bf"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.147857 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-config" (OuterVolumeSpecName: "config") pod "d8b8f7f8-d758-4835-92d2-f39a243788bf" (UID: "d8b8f7f8-d758-4835-92d2-f39a243788bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.162544 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d8b8f7f8-d758-4835-92d2-f39a243788bf" (UID: "d8b8f7f8-d758-4835-92d2-f39a243788bf"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.209476 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.209520 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.209532 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tc6lz\" (UniqueName: \"kubernetes.io/projected/d8b8f7f8-d758-4835-92d2-f39a243788bf-kube-api-access-tc6lz\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.209542 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.209553 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.209570 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8b8f7f8-d758-4835-92d2-f39a243788bf-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.316510 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b868669f-xjvzh" event={"ID":"d8b8f7f8-d758-4835-92d2-f39a243788bf","Type":"ContainerDied","Data":"13ac6162389ff86ad8fa3930c18646b6b9c8635c087979a0e1fd9eab692efb6d"} Dec 11 10:11:50 crc 
kubenswrapper[4746]: I1211 10:11:50.316877 4746 scope.go:117] "RemoveContainer" containerID="4f088117a6ed1597f98d4d93bc39c66fafa3ac57ee9fcd219330dc8dbf1d7f40" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.316769 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b868669f-xjvzh" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.342253 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-24z6n" event={"ID":"adc6dfbf-d8de-4c67-b56c-83127bd0dac5","Type":"ContainerStarted","Data":"f885efe0e4e9ccfd18d9dfb95dcb0fac3a0da25fadcaebbe269ec15c28383fc1"} Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.386392 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cf78879c9-24z6n" podStartSLOduration=3.386373731 podStartE2EDuration="3.386373731s" podCreationTimestamp="2025-12-11 10:11:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:11:50.373515645 +0000 UTC m=+1083.233378958" watchObservedRunningTime="2025-12-11 10:11:50.386373731 +0000 UTC m=+1083.246237044" Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.426229 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-xjvzh"] Dec 11 10:11:50 crc kubenswrapper[4746]: I1211 10:11:50.441618 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b868669f-xjvzh"] Dec 11 10:11:51 crc kubenswrapper[4746]: I1211 10:11:51.108231 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5794b6ff69-gcvdv"] Dec 11 10:11:51 crc kubenswrapper[4746]: I1211 10:11:51.369974 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5794b6ff69-gcvdv" 
event={"ID":"56188568-7337-4fa9-bcc5-25e02aa0366a","Type":"ContainerStarted","Data":"da0c3fff85fd139362a2b22fe1b2f2d8c36bea191e1a6140b182b8e35c336b8b"} Dec 11 10:11:51 crc kubenswrapper[4746]: I1211 10:11:51.376694 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:51 crc kubenswrapper[4746]: I1211 10:11:51.641467 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8b8f7f8-d758-4835-92d2-f39a243788bf" path="/var/lib/kubelet/pods/d8b8f7f8-d758-4835-92d2-f39a243788bf/volumes" Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.600978 4746 generic.go:334] "Generic (PLEG): container finished" podID="f121ad60-591f-4eb9-8ef0-b6147685dbd5" containerID="b8e5c1fd39f2676af136ce6be36675b0dfe3dd5cfef885438408fa384e27032e" exitCode=0 Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.601059 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lmt65" event={"ID":"f121ad60-591f-4eb9-8ef0-b6147685dbd5","Type":"ContainerDied","Data":"b8e5c1fd39f2676af136ce6be36675b0dfe3dd5cfef885438408fa384e27032e"} Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.758835 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-856d48c4bf-f4txn"] Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.812900 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-77c4bd4944-jbg88"] Dec 11 10:11:55 crc kubenswrapper[4746]: E1211 10:11:55.813289 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8b8f7f8-d758-4835-92d2-f39a243788bf" containerName="init" Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.813306 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8b8f7f8-d758-4835-92d2-f39a243788bf" containerName="init" Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.813550 4746 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d8b8f7f8-d758-4835-92d2-f39a243788bf" containerName="init" Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.818866 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.836144 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.838866 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77c4bd4944-jbg88"] Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.887130 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5794b6ff69-gcvdv"] Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.916970 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-b7f654f86-sh94c"] Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.919094 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.931357 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-horizon-tls-certs\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.931407 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhhxk\" (UniqueName: \"kubernetes.io/projected/016a98db-33b2-4acb-a360-5e8a55aebd6c-kube-api-access-xhhxk\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.931468 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/016a98db-33b2-4acb-a360-5e8a55aebd6c-logs\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.931545 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/016a98db-33b2-4acb-a360-5e8a55aebd6c-scripts\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.931609 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-horizon-secret-key\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.931636 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/016a98db-33b2-4acb-a360-5e8a55aebd6c-config-data\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.931655 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-combined-ca-bundle\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:55 crc kubenswrapper[4746]: I1211 10:11:55.941734 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/horizon-b7f654f86-sh94c"] Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.034236 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-horizon-tls-certs\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.034312 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-horizon-tls-certs\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.034349 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhxk\" (UniqueName: \"kubernetes.io/projected/016a98db-33b2-4acb-a360-5e8a55aebd6c-kube-api-access-xhhxk\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.034396 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-logs\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.034445 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/016a98db-33b2-4acb-a360-5e8a55aebd6c-logs\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:56 crc 
kubenswrapper[4746]: I1211 10:11:56.034491 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-config-data\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.034539 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/016a98db-33b2-4acb-a360-5e8a55aebd6c-scripts\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.034616 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-horizon-secret-key\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.034650 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-horizon-secret-key\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.034677 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-scripts\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.034694 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-combined-ca-bundle\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.034726 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/016a98db-33b2-4acb-a360-5e8a55aebd6c-config-data\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.034744 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-combined-ca-bundle\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.034776 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmb95\" (UniqueName: \"kubernetes.io/projected/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-kube-api-access-pmb95\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.036198 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/016a98db-33b2-4acb-a360-5e8a55aebd6c-logs\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.037602 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/016a98db-33b2-4acb-a360-5e8a55aebd6c-config-data\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.038916 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/016a98db-33b2-4acb-a360-5e8a55aebd6c-scripts\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.043985 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-horizon-tls-certs\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.044330 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-combined-ca-bundle\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.047995 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-horizon-secret-key\") pod \"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.058033 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhxk\" (UniqueName: \"kubernetes.io/projected/016a98db-33b2-4acb-a360-5e8a55aebd6c-kube-api-access-xhhxk\") pod 
\"horizon-77c4bd4944-jbg88\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") " pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.137146 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-combined-ca-bundle\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.137262 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-scripts\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.137333 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmb95\" (UniqueName: \"kubernetes.io/projected/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-kube-api-access-pmb95\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.137392 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-horizon-tls-certs\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.137433 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-logs\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " 
pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.137491 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-config-data\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.137563 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-horizon-secret-key\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.140718 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-scripts\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.141088 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-logs\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.142253 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-config-data\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.142728 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-horizon-secret-key\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.144463 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-combined-ca-bundle\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.145955 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-horizon-tls-certs\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.155466 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.165103 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmb95\" (UniqueName: \"kubernetes.io/projected/b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81-kube-api-access-pmb95\") pod \"horizon-b7f654f86-sh94c\" (UID: \"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81\") " pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:56 crc kubenswrapper[4746]: I1211 10:11:56.248139 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:11:59 crc kubenswrapper[4746]: I1211 10:11:57.516275 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:11:59 crc kubenswrapper[4746]: I1211 10:11:57.768132 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ckzx7"] Dec 11 10:11:59 crc kubenswrapper[4746]: I1211 10:11:57.768603 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7" podUID="3ee5c96f-8699-41e0-9318-4a5ad8af233d" containerName="dnsmasq-dns" containerID="cri-o://906f65b562849c513e3290bbefd48980bea8ec54dc852a875a265e6f1ceeade8" gracePeriod=10 Dec 11 10:11:59 crc kubenswrapper[4746]: I1211 10:11:58.918562 4746 generic.go:334] "Generic (PLEG): container finished" podID="3ee5c96f-8699-41e0-9318-4a5ad8af233d" containerID="906f65b562849c513e3290bbefd48980bea8ec54dc852a875a265e6f1ceeade8" exitCode=0 Dec 11 10:11:59 crc kubenswrapper[4746]: I1211 10:11:58.918625 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7" event={"ID":"3ee5c96f-8699-41e0-9318-4a5ad8af233d","Type":"ContainerDied","Data":"906f65b562849c513e3290bbefd48980bea8ec54dc852a875a265e6f1ceeade8"} Dec 11 10:11:59 crc kubenswrapper[4746]: I1211 10:11:59.729531 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7" podUID="3ee5c96f-8699-41e0-9318-4a5ad8af233d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Dec 11 10:12:04 crc kubenswrapper[4746]: I1211 10:12:04.729979 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7" podUID="3ee5c96f-8699-41e0-9318-4a5ad8af233d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Dec 11 10:12:06 
crc kubenswrapper[4746]: E1211 10:12:06.905241 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 11 10:12:06 crc kubenswrapper[4746]: E1211 10:12:06.906546 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndbh65fh694h655h686h5c6h54fh77h5cfh5b9h686h558h67h695h5d9hbfh658h5c9h595h59dh598h86h688hb5h64fh5bh65dh65fh598h544h5b9h564q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-st48x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,P
rocMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-856d48c4bf-f4txn_openstack(b817e797-6fd0-4b06-94a5-32ade5baf2f5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:12:06 crc kubenswrapper[4746]: E1211 10:12:06.914356 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 11 10:12:06 crc kubenswrapper[4746]: E1211 10:12:06.914674 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n77h549h664h5h698h5b6h56ch5c9h5c7h9h686h5c5h5c5hdch659h5fchdch58ch689h5fch79h66ch54fhf8h57bh668h58ch98h66bh574hc6h697q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b567r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5794b6ff69-gcvdv_openstack(56188568-7337-4fa9-bcc5-25e02aa0366a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:12:06 crc kubenswrapper[4746]: E1211 
10:12:06.915823 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-856d48c4bf-f4txn" podUID="b817e797-6fd0-4b06-94a5-32ade5baf2f5" Dec 11 10:12:06 crc kubenswrapper[4746]: E1211 10:12:06.916875 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5794b6ff69-gcvdv" podUID="56188568-7337-4fa9-bcc5-25e02aa0366a" Dec 11 10:12:06 crc kubenswrapper[4746]: E1211 10:12:06.965579 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Dec 11 10:12:06 crc kubenswrapper[4746]: E1211 10:12:06.966428 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n577h5f6h656hbch677h595h5b8h56chc5h678h655h55ch54ch5bbh5fh5f4h5f7h546h68h5fdh575h59dh686h55fh5c4h99h656h79h679h644h68bhfdq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbkng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-699dbbd6bf-zhtzw_openstack(857a94b0-85c3-458e-b0f9-560e834bedda): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:12:06 crc kubenswrapper[4746]: E1211 
10:12:06.969314 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-699dbbd6bf-zhtzw" podUID="857a94b0-85c3-458e-b0f9-560e834bedda" Dec 11 10:12:07 crc kubenswrapper[4746]: I1211 10:12:07.584254 4746 generic.go:334] "Generic (PLEG): container finished" podID="bce3e7bb-063f-4e71-b1d7-e3a14e1a9983" containerID="60ae15292efbbacd5588fccd317e73efd7b4778c93ec9d786e7ee9d557a4f772" exitCode=0 Dec 11 10:12:07 crc kubenswrapper[4746]: I1211 10:12:07.584304 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fh8cg" event={"ID":"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983","Type":"ContainerDied","Data":"60ae15292efbbacd5588fccd317e73efd7b4778c93ec9d786e7ee9d557a4f772"} Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.136254 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.327012 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-combined-ca-bundle\") pod \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.327240 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-scripts\") pod \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.327355 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tkjj\" (UniqueName: \"kubernetes.io/projected/f121ad60-591f-4eb9-8ef0-b6147685dbd5-kube-api-access-5tkjj\") pod \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.327541 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-config-data\") pod \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.327690 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-fernet-keys\") pod \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.327741 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-credential-keys\") pod \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\" (UID: \"f121ad60-591f-4eb9-8ef0-b6147685dbd5\") " Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.337487 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-scripts" (OuterVolumeSpecName: "scripts") pod "f121ad60-591f-4eb9-8ef0-b6147685dbd5" (UID: "f121ad60-591f-4eb9-8ef0-b6147685dbd5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.337721 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f121ad60-591f-4eb9-8ef0-b6147685dbd5" (UID: "f121ad60-591f-4eb9-8ef0-b6147685dbd5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.337909 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f121ad60-591f-4eb9-8ef0-b6147685dbd5-kube-api-access-5tkjj" (OuterVolumeSpecName: "kube-api-access-5tkjj") pod "f121ad60-591f-4eb9-8ef0-b6147685dbd5" (UID: "f121ad60-591f-4eb9-8ef0-b6147685dbd5"). InnerVolumeSpecName "kube-api-access-5tkjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.338702 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f121ad60-591f-4eb9-8ef0-b6147685dbd5" (UID: "f121ad60-591f-4eb9-8ef0-b6147685dbd5"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.358836 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-config-data" (OuterVolumeSpecName: "config-data") pod "f121ad60-591f-4eb9-8ef0-b6147685dbd5" (UID: "f121ad60-591f-4eb9-8ef0-b6147685dbd5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.359415 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f121ad60-591f-4eb9-8ef0-b6147685dbd5" (UID: "f121ad60-591f-4eb9-8ef0-b6147685dbd5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.430616 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.430680 4746 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.430704 4746 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.430827 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 
10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.430854 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f121ad60-591f-4eb9-8ef0-b6147685dbd5-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.430876 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tkjj\" (UniqueName: \"kubernetes.io/projected/f121ad60-591f-4eb9-8ef0-b6147685dbd5-kube-api-access-5tkjj\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.610114 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lmt65" event={"ID":"f121ad60-591f-4eb9-8ef0-b6147685dbd5","Type":"ContainerDied","Data":"439c9ebff6aa5a13963960216bb9c5e3581a3bb4ba986f079fc43f312b5344fd"} Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.610173 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="439c9ebff6aa5a13963960216bb9c5e3581a3bb4ba986f079fc43f312b5344fd" Dec 11 10:12:09 crc kubenswrapper[4746]: I1211 10:12:09.610265 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-lmt65" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.262035 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lmt65"] Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.275997 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lmt65"] Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.359099 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-nxg7v"] Dec 11 10:12:10 crc kubenswrapper[4746]: E1211 10:12:10.359610 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f121ad60-591f-4eb9-8ef0-b6147685dbd5" containerName="keystone-bootstrap" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.359630 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f121ad60-591f-4eb9-8ef0-b6147685dbd5" containerName="keystone-bootstrap" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.359842 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f121ad60-591f-4eb9-8ef0-b6147685dbd5" containerName="keystone-bootstrap" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.360915 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.363399 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-combined-ca-bundle\") pod \"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.363474 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-config-data\") pod \"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.363666 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-fernet-keys\") pod \"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.363747 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-scripts\") pod \"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.364157 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqsrh\" (UniqueName: \"kubernetes.io/projected/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-kube-api-access-qqsrh\") pod \"keystone-bootstrap-nxg7v\" (UID: 
\"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.364198 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.364280 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-credential-keys\") pod \"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.364745 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.365322 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.365466 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-slbzm" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.365527 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.394225 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nxg7v"] Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.466496 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-credential-keys\") pod \"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.466614 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-combined-ca-bundle\") pod \"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.466646 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-config-data\") pod \"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.466720 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-fernet-keys\") pod \"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.466752 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-scripts\") pod \"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.466799 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqsrh\" (UniqueName: \"kubernetes.io/projected/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-kube-api-access-qqsrh\") pod \"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.472734 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-credential-keys\") pod 
\"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.472782 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-combined-ca-bundle\") pod \"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.474646 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-scripts\") pod \"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.477443 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-config-data\") pod \"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.477964 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-fernet-keys\") pod \"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc kubenswrapper[4746]: I1211 10:12:10.487195 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqsrh\" (UniqueName: \"kubernetes.io/projected/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-kube-api-access-qqsrh\") pod \"keystone-bootstrap-nxg7v\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:10 crc 
kubenswrapper[4746]: I1211 10:12:10.711421 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:11 crc kubenswrapper[4746]: I1211 10:12:11.652159 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f121ad60-591f-4eb9-8ef0-b6147685dbd5" path="/var/lib/kubelet/pods/f121ad60-591f-4eb9-8ef0-b6147685dbd5/volumes" Dec 11 10:12:14 crc kubenswrapper[4746]: I1211 10:12:14.729263 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7" podUID="3ee5c96f-8699-41e0-9318-4a5ad8af233d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Dec 11 10:12:14 crc kubenswrapper[4746]: I1211 10:12:14.730210 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7" Dec 11 10:12:19 crc kubenswrapper[4746]: I1211 10:12:19.730849 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7" podUID="3ee5c96f-8699-41e0-9318-4a5ad8af233d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Dec 11 10:12:21 crc kubenswrapper[4746]: E1211 10:12:21.092466 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 11 10:12:21 crc kubenswrapper[4746]: E1211 10:12:21.092773 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wf4hg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-lznjj_openstack(c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:12:21 crc kubenswrapper[4746]: E1211 10:12:21.093969 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-lznjj" 
podUID="c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc" Dec 11 10:12:21 crc kubenswrapper[4746]: E1211 10:12:21.783383 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-lznjj" podUID="c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc" Dec 11 10:12:22 crc kubenswrapper[4746]: E1211 10:12:22.467330 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 11 10:12:22 crc kubenswrapper[4746]: E1211 10:12:22.468466 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fpxqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-4n4rh_openstack(4080f404-aff6-42e2-856c-5b347b908963): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:12:22 crc kubenswrapper[4746]: E1211 10:12:22.469712 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-4n4rh" podUID="4080f404-aff6-42e2-856c-5b347b908963" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.548322 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.555507 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.572813 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.644470 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/857a94b0-85c3-458e-b0f9-560e834bedda-horizon-secret-key\") pod \"857a94b0-85c3-458e-b0f9-560e834bedda\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.644587 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/857a94b0-85c3-458e-b0f9-560e834bedda-scripts\") pod \"857a94b0-85c3-458e-b0f9-560e834bedda\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.644685 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b817e797-6fd0-4b06-94a5-32ade5baf2f5-config-data\") pod \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.644775 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/857a94b0-85c3-458e-b0f9-560e834bedda-logs\") pod \"857a94b0-85c3-458e-b0f9-560e834bedda\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.644866 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hbkng\" (UniqueName: \"kubernetes.io/projected/857a94b0-85c3-458e-b0f9-560e834bedda-kube-api-access-hbkng\") pod \"857a94b0-85c3-458e-b0f9-560e834bedda\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.644983 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/857a94b0-85c3-458e-b0f9-560e834bedda-config-data\") pod \"857a94b0-85c3-458e-b0f9-560e834bedda\" (UID: \"857a94b0-85c3-458e-b0f9-560e834bedda\") " Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.645263 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b817e797-6fd0-4b06-94a5-32ade5baf2f5-horizon-secret-key\") pod \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.645405 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b817e797-6fd0-4b06-94a5-32ade5baf2f5-logs\") pod \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.645506 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b817e797-6fd0-4b06-94a5-32ade5baf2f5-scripts\") pod \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.645535 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st48x\" (UniqueName: \"kubernetes.io/projected/b817e797-6fd0-4b06-94a5-32ade5baf2f5-kube-api-access-st48x\") pod \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\" (UID: \"b817e797-6fd0-4b06-94a5-32ade5baf2f5\") " Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.645713 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/857a94b0-85c3-458e-b0f9-560e834bedda-logs" (OuterVolumeSpecName: "logs") pod "857a94b0-85c3-458e-b0f9-560e834bedda" (UID: "857a94b0-85c3-458e-b0f9-560e834bedda"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.645904 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b817e797-6fd0-4b06-94a5-32ade5baf2f5-logs" (OuterVolumeSpecName: "logs") pod "b817e797-6fd0-4b06-94a5-32ade5baf2f5" (UID: "b817e797-6fd0-4b06-94a5-32ade5baf2f5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.646147 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b817e797-6fd0-4b06-94a5-32ade5baf2f5-scripts" (OuterVolumeSpecName: "scripts") pod "b817e797-6fd0-4b06-94a5-32ade5baf2f5" (UID: "b817e797-6fd0-4b06-94a5-32ade5baf2f5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.646294 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b817e797-6fd0-4b06-94a5-32ade5baf2f5-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.646317 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b817e797-6fd0-4b06-94a5-32ade5baf2f5-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.646330 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/857a94b0-85c3-458e-b0f9-560e834bedda-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.646340 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/857a94b0-85c3-458e-b0f9-560e834bedda-scripts" (OuterVolumeSpecName: "scripts") pod "857a94b0-85c3-458e-b0f9-560e834bedda" (UID: "857a94b0-85c3-458e-b0f9-560e834bedda"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.646418 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b817e797-6fd0-4b06-94a5-32ade5baf2f5-config-data" (OuterVolumeSpecName: "config-data") pod "b817e797-6fd0-4b06-94a5-32ade5baf2f5" (UID: "b817e797-6fd0-4b06-94a5-32ade5baf2f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.646419 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/857a94b0-85c3-458e-b0f9-560e834bedda-config-data" (OuterVolumeSpecName: "config-data") pod "857a94b0-85c3-458e-b0f9-560e834bedda" (UID: "857a94b0-85c3-458e-b0f9-560e834bedda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.650613 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b817e797-6fd0-4b06-94a5-32ade5baf2f5-kube-api-access-st48x" (OuterVolumeSpecName: "kube-api-access-st48x") pod "b817e797-6fd0-4b06-94a5-32ade5baf2f5" (UID: "b817e797-6fd0-4b06-94a5-32ade5baf2f5"). InnerVolumeSpecName "kube-api-access-st48x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.651627 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/857a94b0-85c3-458e-b0f9-560e834bedda-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "857a94b0-85c3-458e-b0f9-560e834bedda" (UID: "857a94b0-85c3-458e-b0f9-560e834bedda"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.656005 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/857a94b0-85c3-458e-b0f9-560e834bedda-kube-api-access-hbkng" (OuterVolumeSpecName: "kube-api-access-hbkng") pod "857a94b0-85c3-458e-b0f9-560e834bedda" (UID: "857a94b0-85c3-458e-b0f9-560e834bedda"). InnerVolumeSpecName "kube-api-access-hbkng". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.675191 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b817e797-6fd0-4b06-94a5-32ade5baf2f5-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b817e797-6fd0-4b06-94a5-32ade5baf2f5" (UID: "b817e797-6fd0-4b06-94a5-32ade5baf2f5"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.748163 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56188568-7337-4fa9-bcc5-25e02aa0366a-logs\") pod \"56188568-7337-4fa9-bcc5-25e02aa0366a\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.748259 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56188568-7337-4fa9-bcc5-25e02aa0366a-config-data\") pod \"56188568-7337-4fa9-bcc5-25e02aa0366a\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.748300 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56188568-7337-4fa9-bcc5-25e02aa0366a-scripts\") pod \"56188568-7337-4fa9-bcc5-25e02aa0366a\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " Dec 11 
10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.748343 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b567r\" (UniqueName: \"kubernetes.io/projected/56188568-7337-4fa9-bcc5-25e02aa0366a-kube-api-access-b567r\") pod \"56188568-7337-4fa9-bcc5-25e02aa0366a\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.748557 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/56188568-7337-4fa9-bcc5-25e02aa0366a-horizon-secret-key\") pod \"56188568-7337-4fa9-bcc5-25e02aa0366a\" (UID: \"56188568-7337-4fa9-bcc5-25e02aa0366a\") " Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.749505 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56188568-7337-4fa9-bcc5-25e02aa0366a-logs" (OuterVolumeSpecName: "logs") pod "56188568-7337-4fa9-bcc5-25e02aa0366a" (UID: "56188568-7337-4fa9-bcc5-25e02aa0366a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.749643 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56188568-7337-4fa9-bcc5-25e02aa0366a-scripts" (OuterVolumeSpecName: "scripts") pod "56188568-7337-4fa9-bcc5-25e02aa0366a" (UID: "56188568-7337-4fa9-bcc5-25e02aa0366a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.750156 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56188568-7337-4fa9-bcc5-25e02aa0366a-config-data" (OuterVolumeSpecName: "config-data") pod "56188568-7337-4fa9-bcc5-25e02aa0366a" (UID: "56188568-7337-4fa9-bcc5-25e02aa0366a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.750283 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st48x\" (UniqueName: \"kubernetes.io/projected/b817e797-6fd0-4b06-94a5-32ade5baf2f5-kube-api-access-st48x\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.750317 4746 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/857a94b0-85c3-458e-b0f9-560e834bedda-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.750339 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/857a94b0-85c3-458e-b0f9-560e834bedda-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.750356 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/56188568-7337-4fa9-bcc5-25e02aa0366a-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.750368 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56188568-7337-4fa9-bcc5-25e02aa0366a-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.750383 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b817e797-6fd0-4b06-94a5-32ade5baf2f5-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.750400 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hbkng\" (UniqueName: \"kubernetes.io/projected/857a94b0-85c3-458e-b0f9-560e834bedda-kube-api-access-hbkng\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.750412 4746 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/857a94b0-85c3-458e-b0f9-560e834bedda-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.750423 4746 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b817e797-6fd0-4b06-94a5-32ade5baf2f5-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.751935 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56188568-7337-4fa9-bcc5-25e02aa0366a-kube-api-access-b567r" (OuterVolumeSpecName: "kube-api-access-b567r") pod "56188568-7337-4fa9-bcc5-25e02aa0366a" (UID: "56188568-7337-4fa9-bcc5-25e02aa0366a"). InnerVolumeSpecName "kube-api-access-b567r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.755189 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56188568-7337-4fa9-bcc5-25e02aa0366a-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "56188568-7337-4fa9-bcc5-25e02aa0366a" (UID: "56188568-7337-4fa9-bcc5-25e02aa0366a"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.792677 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-699dbbd6bf-zhtzw" event={"ID":"857a94b0-85c3-458e-b0f9-560e834bedda","Type":"ContainerDied","Data":"38bd66174b3faea1ae44ada2bb5f278461bc70c42116752d748d1c42495a131f"} Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.792744 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-699dbbd6bf-zhtzw" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.794989 4746 generic.go:334] "Generic (PLEG): container finished" podID="4daad880-1f8c-4f37-b718-b8b9eb88d0f3" containerID="d539efced0f07c5cf02199c4e927d7a28b81ada0b57c67dc11a5d0678f447880" exitCode=0 Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.795090 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j82k7" event={"ID":"4daad880-1f8c-4f37-b718-b8b9eb88d0f3","Type":"ContainerDied","Data":"d539efced0f07c5cf02199c4e927d7a28b81ada0b57c67dc11a5d0678f447880"} Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.796488 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-856d48c4bf-f4txn" event={"ID":"b817e797-6fd0-4b06-94a5-32ade5baf2f5","Type":"ContainerDied","Data":"dc4538d3fe68f45d8ccd0468d202d70d6145f75ec5ba453223b102578f38f578"} Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.796519 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-856d48c4bf-f4txn" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.805763 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5794b6ff69-gcvdv" event={"ID":"56188568-7337-4fa9-bcc5-25e02aa0366a","Type":"ContainerDied","Data":"da0c3fff85fd139362a2b22fe1b2f2d8c36bea191e1a6140b182b8e35c336b8b"} Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.805782 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5794b6ff69-gcvdv" Dec 11 10:12:22 crc kubenswrapper[4746]: E1211 10:12:22.809361 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-4n4rh" podUID="4080f404-aff6-42e2-856c-5b347b908963" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.856714 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/56188568-7337-4fa9-bcc5-25e02aa0366a-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.856763 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b567r\" (UniqueName: \"kubernetes.io/projected/56188568-7337-4fa9-bcc5-25e02aa0366a-kube-api-access-b567r\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.856779 4746 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/56188568-7337-4fa9-bcc5-25e02aa0366a-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.905205 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5794b6ff69-gcvdv"] Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.927350 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5794b6ff69-gcvdv"] Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.954420 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-699dbbd6bf-zhtzw"] Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.972414 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-699dbbd6bf-zhtzw"] Dec 11 10:12:22 crc kubenswrapper[4746]: I1211 10:12:22.999574 4746 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/horizon-856d48c4bf-f4txn"] Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.008728 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-856d48c4bf-f4txn"] Dec 11 10:12:23 crc kubenswrapper[4746]: E1211 10:12:23.082961 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 11 10:12:23 crc kubenswrapper[4746]: E1211 10:12:23.083172 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n589h67h5h67fh559h5fh88h585h55hddhbfh658h75h7dh5dh5d9h6dh7bh5cch5d5hd6h567hbdh668h88h79h54bhc4h79h95h587h68q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x5ck4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/servi
ceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.084032 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.092979 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-fh8cg" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.265418 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-config\") pod \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.265995 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-combined-ca-bundle\") pod \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\" (UID: \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\") " Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.266148 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-config-data\") pod \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\" (UID: \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\") " Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.266295 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf2st\" (UniqueName: \"kubernetes.io/projected/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-kube-api-access-nf2st\") pod \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\" (UID: \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\") " Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.266354 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-ovsdbserver-nb\") pod \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.266438 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-ovsdbserver-sb\") pod \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.266499 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-dns-svc\") pod \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.266561 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp9d4\" (UniqueName: \"kubernetes.io/projected/3ee5c96f-8699-41e0-9318-4a5ad8af233d-kube-api-access-pp9d4\") pod \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\" (UID: \"3ee5c96f-8699-41e0-9318-4a5ad8af233d\") " Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.266603 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-db-sync-config-data\") pod \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\" (UID: \"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983\") " Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.274107 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-kube-api-access-nf2st" (OuterVolumeSpecName: "kube-api-access-nf2st") pod "bce3e7bb-063f-4e71-b1d7-e3a14e1a9983" (UID: "bce3e7bb-063f-4e71-b1d7-e3a14e1a9983"). InnerVolumeSpecName "kube-api-access-nf2st". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.286356 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ee5c96f-8699-41e0-9318-4a5ad8af233d-kube-api-access-pp9d4" (OuterVolumeSpecName: "kube-api-access-pp9d4") pod "3ee5c96f-8699-41e0-9318-4a5ad8af233d" (UID: "3ee5c96f-8699-41e0-9318-4a5ad8af233d"). InnerVolumeSpecName "kube-api-access-pp9d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.292413 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "bce3e7bb-063f-4e71-b1d7-e3a14e1a9983" (UID: "bce3e7bb-063f-4e71-b1d7-e3a14e1a9983"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.371467 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bce3e7bb-063f-4e71-b1d7-e3a14e1a9983" (UID: "bce3e7bb-063f-4e71-b1d7-e3a14e1a9983"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.372718 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf2st\" (UniqueName: \"kubernetes.io/projected/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-kube-api-access-nf2st\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.372754 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp9d4\" (UniqueName: \"kubernetes.io/projected/3ee5c96f-8699-41e0-9318-4a5ad8af233d-kube-api-access-pp9d4\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.372801 4746 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.372815 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.398152 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3ee5c96f-8699-41e0-9318-4a5ad8af233d" (UID: "3ee5c96f-8699-41e0-9318-4a5ad8af233d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.398244 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3ee5c96f-8699-41e0-9318-4a5ad8af233d" (UID: "3ee5c96f-8699-41e0-9318-4a5ad8af233d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.398494 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-config" (OuterVolumeSpecName: "config") pod "3ee5c96f-8699-41e0-9318-4a5ad8af233d" (UID: "3ee5c96f-8699-41e0-9318-4a5ad8af233d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.406968 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-config-data" (OuterVolumeSpecName: "config-data") pod "bce3e7bb-063f-4e71-b1d7-e3a14e1a9983" (UID: "bce3e7bb-063f-4e71-b1d7-e3a14e1a9983"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.412758 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3ee5c96f-8699-41e0-9318-4a5ad8af233d" (UID: "3ee5c96f-8699-41e0-9318-4a5ad8af233d"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.475224 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.475293 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.475309 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.475324 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ee5c96f-8699-41e0-9318-4a5ad8af233d-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.475337 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.627871 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-77c4bd4944-jbg88"] Dec 11 10:12:23 crc kubenswrapper[4746]: W1211 10:12:23.642383 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5fc9dd4_aca9_44c7_b8d4_cbbc19f05e81.slice/crio-3a07c7c6681502aca451823cfa681510be2556425a8619956c4ddb9642956fd0 WatchSource:0}: Error finding container 3a07c7c6681502aca451823cfa681510be2556425a8619956c4ddb9642956fd0: Status 404 returned error can't find the container with id 
3a07c7c6681502aca451823cfa681510be2556425a8619956c4ddb9642956fd0 Dec 11 10:12:23 crc kubenswrapper[4746]: W1211 10:12:23.647218 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod016a98db_33b2_4acb_a360_5e8a55aebd6c.slice/crio-2f8739faf813bee05e2a0db5437afeff3300e2b8a6f2aa2eb36c5b117d2ccf4d WatchSource:0}: Error finding container 2f8739faf813bee05e2a0db5437afeff3300e2b8a6f2aa2eb36c5b117d2ccf4d: Status 404 returned error can't find the container with id 2f8739faf813bee05e2a0db5437afeff3300e2b8a6f2aa2eb36c5b117d2ccf4d Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.658599 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56188568-7337-4fa9-bcc5-25e02aa0366a" path="/var/lib/kubelet/pods/56188568-7337-4fa9-bcc5-25e02aa0366a/volumes" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.659222 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="857a94b0-85c3-458e-b0f9-560e834bedda" path="/var/lib/kubelet/pods/857a94b0-85c3-458e-b0f9-560e834bedda/volumes" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.659816 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b817e797-6fd0-4b06-94a5-32ade5baf2f5" path="/var/lib/kubelet/pods/b817e797-6fd0-4b06-94a5-32ade5baf2f5/volumes" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.660769 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-b7f654f86-sh94c"] Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.738918 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-nxg7v"] Dec 11 10:12:23 crc kubenswrapper[4746]: W1211 10:12:23.745369 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddbcfe442_59c5_4c0e_b051_9ea04f8127b3.slice/crio-3bf5d5d184150e9575882c3bc0a84f9bad403432d1e4d4035ab2c1d4c143dda3 
WatchSource:0}: Error finding container 3bf5d5d184150e9575882c3bc0a84f9bad403432d1e4d4035ab2c1d4c143dda3: Status 404 returned error can't find the container with id 3bf5d5d184150e9575882c3bc0a84f9bad403432d1e4d4035ab2c1d4c143dda3 Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.874098 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7" event={"ID":"3ee5c96f-8699-41e0-9318-4a5ad8af233d","Type":"ContainerDied","Data":"75b8e6889e5a451dc842f80f1587328823d91971a37a0031db509404796cfd9d"} Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.874715 4746 scope.go:117] "RemoveContainer" containerID="906f65b562849c513e3290bbefd48980bea8ec54dc852a875a265e6f1ceeade8" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.874980 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.879606 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m9x6s" event={"ID":"9962918f-3f76-42ae-b292-0c2300106516","Type":"ContainerStarted","Data":"70cf642fe694e74c9df879731c530d62d8f3366c2fdf07e899d4eb6446bc8cbb"} Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.881804 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77c4bd4944-jbg88" event={"ID":"016a98db-33b2-4acb-a360-5e8a55aebd6c","Type":"ContainerStarted","Data":"2f8739faf813bee05e2a0db5437afeff3300e2b8a6f2aa2eb36c5b117d2ccf4d"} Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.884673 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-fh8cg" event={"ID":"bce3e7bb-063f-4e71-b1d7-e3a14e1a9983","Type":"ContainerDied","Data":"857dfbe1b5815b24420fe24f82299c184b6fc3f6c7a37bf9f7c563bfd720de59"} Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.884712 4746 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="857dfbe1b5815b24420fe24f82299c184b6fc3f6c7a37bf9f7c563bfd720de59" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.884792 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-fh8cg" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.886497 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nxg7v" event={"ID":"dbcfe442-59c5-4c0e-b051-9ea04f8127b3","Type":"ContainerStarted","Data":"3bf5d5d184150e9575882c3bc0a84f9bad403432d1e4d4035ab2c1d4c143dda3"} Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.888902 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b7f654f86-sh94c" event={"ID":"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81","Type":"ContainerStarted","Data":"3a07c7c6681502aca451823cfa681510be2556425a8619956c4ddb9642956fd0"} Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.905423 4746 scope.go:117] "RemoveContainer" containerID="5c72510cfc8afdcad0e1236396d77ea53bbd36f8a53211f72a7c63ef79ec23fc" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.905732 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-m9x6s" podStartSLOduration=4.398327207 podStartE2EDuration="37.905712686s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="2025-12-11 10:11:48.918880312 +0000 UTC m=+1081.778743625" lastFinishedPulling="2025-12-11 10:12:22.426265771 +0000 UTC m=+1115.286129104" observedRunningTime="2025-12-11 10:12:23.897867845 +0000 UTC m=+1116.757731168" watchObservedRunningTime="2025-12-11 10:12:23.905712686 +0000 UTC m=+1116.765575999" Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.922467 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-ckzx7"] Dec 11 10:12:23 crc kubenswrapper[4746]: I1211 10:12:23.929580 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-b8fbc5445-ckzx7"] Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.237029 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-j82k7" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.394706 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-combined-ca-bundle\") pod \"4daad880-1f8c-4f37-b718-b8b9eb88d0f3\" (UID: \"4daad880-1f8c-4f37-b718-b8b9eb88d0f3\") " Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.394774 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5rg8\" (UniqueName: \"kubernetes.io/projected/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-kube-api-access-k5rg8\") pod \"4daad880-1f8c-4f37-b718-b8b9eb88d0f3\" (UID: \"4daad880-1f8c-4f37-b718-b8b9eb88d0f3\") " Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.394839 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-config\") pod \"4daad880-1f8c-4f37-b718-b8b9eb88d0f3\" (UID: \"4daad880-1f8c-4f37-b718-b8b9eb88d0f3\") " Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.401370 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-kube-api-access-k5rg8" (OuterVolumeSpecName: "kube-api-access-k5rg8") pod "4daad880-1f8c-4f37-b718-b8b9eb88d0f3" (UID: "4daad880-1f8c-4f37-b718-b8b9eb88d0f3"). InnerVolumeSpecName "kube-api-access-k5rg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.437611 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4daad880-1f8c-4f37-b718-b8b9eb88d0f3" (UID: "4daad880-1f8c-4f37-b718-b8b9eb88d0f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.447942 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-config" (OuterVolumeSpecName: "config") pod "4daad880-1f8c-4f37-b718-b8b9eb88d0f3" (UID: "4daad880-1f8c-4f37-b718-b8b9eb88d0f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.497505 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.497535 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5rg8\" (UniqueName: \"kubernetes.io/projected/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-kube-api-access-k5rg8\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.497552 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4daad880-1f8c-4f37-b718-b8b9eb88d0f3-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.622942 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lftcq"] Dec 11 10:12:24 crc kubenswrapper[4746]: E1211 10:12:24.623396 4746 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="bce3e7bb-063f-4e71-b1d7-e3a14e1a9983" containerName="glance-db-sync" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.623423 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce3e7bb-063f-4e71-b1d7-e3a14e1a9983" containerName="glance-db-sync" Dec 11 10:12:24 crc kubenswrapper[4746]: E1211 10:12:24.623436 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee5c96f-8699-41e0-9318-4a5ad8af233d" containerName="init" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.623446 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee5c96f-8699-41e0-9318-4a5ad8af233d" containerName="init" Dec 11 10:12:24 crc kubenswrapper[4746]: E1211 10:12:24.623466 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee5c96f-8699-41e0-9318-4a5ad8af233d" containerName="dnsmasq-dns" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.623476 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee5c96f-8699-41e0-9318-4a5ad8af233d" containerName="dnsmasq-dns" Dec 11 10:12:24 crc kubenswrapper[4746]: E1211 10:12:24.623501 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4daad880-1f8c-4f37-b718-b8b9eb88d0f3" containerName="neutron-db-sync" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.623508 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4daad880-1f8c-4f37-b718-b8b9eb88d0f3" containerName="neutron-db-sync" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.623687 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4daad880-1f8c-4f37-b718-b8b9eb88d0f3" containerName="neutron-db-sync" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.623705 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="bce3e7bb-063f-4e71-b1d7-e3a14e1a9983" containerName="glance-db-sync" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.623721 4746 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="3ee5c96f-8699-41e0-9318-4a5ad8af233d" containerName="dnsmasq-dns" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.631308 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.638505 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lftcq"] Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.732538 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-ckzx7" podUID="3ee5c96f-8699-41e0-9318-4a5ad8af233d" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.805603 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.807592 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.807900 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-config\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 
10:12:24.807949 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.808182 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc86c\" (UniqueName: \"kubernetes.io/projected/859f4532-560f-4e10-ac15-cf7b466a7a6d-kube-api-access-rc86c\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.808300 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.905638 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-j82k7" event={"ID":"4daad880-1f8c-4f37-b718-b8b9eb88d0f3","Type":"ContainerDied","Data":"8fdc0ffa4e25ae912f13ea47eec9c51fea90c4fc6a938dd581274d2fba01a4d2"} Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.905682 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fdc0ffa4e25ae912f13ea47eec9c51fea90c4fc6a938dd581274d2fba01a4d2" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.905742 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-j82k7" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.909733 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.909843 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.909876 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.909933 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-config\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.909957 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 
10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.909990 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rc86c\" (UniqueName: \"kubernetes.io/projected/859f4532-560f-4e10-ac15-cf7b466a7a6d-kube-api-access-rc86c\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.912287 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.912333 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.912715 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.913377 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-config\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.913423 4746 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/keystone-bootstrap-nxg7v" event={"ID":"dbcfe442-59c5-4c0e-b051-9ea04f8127b3","Type":"ContainerStarted","Data":"eafa096af237c54a93ce9c9f67f35f86f217b42bfc9f590f905ee5ddfc6b61df"} Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.913566 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.955572 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc86c\" (UniqueName: \"kubernetes.io/projected/859f4532-560f-4e10-ac15-cf7b466a7a6d-kube-api-access-rc86c\") pod \"dnsmasq-dns-56df8fb6b7-lftcq\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.962300 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-nxg7v" podStartSLOduration=14.962278155 podStartE2EDuration="14.962278155s" podCreationTimestamp="2025-12-11 10:12:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:12:24.939280488 +0000 UTC m=+1117.799143801" watchObservedRunningTime="2025-12-11 10:12:24.962278155 +0000 UTC m=+1117.822141468" Dec 11 10:12:24 crc kubenswrapper[4746]: I1211 10:12:24.966520 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.102094 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lftcq"] Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.177445 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-vm6ft"] Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.179978 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.206362 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-vm6ft"] Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.244934 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-548544f678-9jrhc"] Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.246663 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.250355 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.250727 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.253910 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.254734 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nstwz" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.275691 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-548544f678-9jrhc"] Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.321468 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-dns-svc\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.321519 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg6ns\" (UniqueName: \"kubernetes.io/projected/b3ee6299-beee-4379-86e4-89b33e6e11d0-kube-api-access-qg6ns\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.321613 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-ovsdbserver-nb\") pod 
\"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.323199 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.323293 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-config\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.323354 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.424797 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-config\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.424893 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-combined-ca-bundle\") 
pod \"neutron-548544f678-9jrhc\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.424928 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.424953 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-ovndb-tls-certs\") pod \"neutron-548544f678-9jrhc\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.424978 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-httpd-config\") pod \"neutron-548544f678-9jrhc\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.425019 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-dns-svc\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.425076 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwb4v\" (UniqueName: \"kubernetes.io/projected/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-kube-api-access-nwb4v\") pod 
\"neutron-548544f678-9jrhc\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.425104 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg6ns\" (UniqueName: \"kubernetes.io/projected/b3ee6299-beee-4379-86e4-89b33e6e11d0-kube-api-access-qg6ns\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.425212 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.425301 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.425337 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-config\") pod \"neutron-548544f678-9jrhc\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.426527 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-dns-svc\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: 
\"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.426527 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-config\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.427234 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.427686 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.428278 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.469168 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg6ns\" (UniqueName: \"kubernetes.io/projected/b3ee6299-beee-4379-86e4-89b33e6e11d0-kube-api-access-qg6ns\") pod \"dnsmasq-dns-6b7b667979-vm6ft\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " 
pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.535391 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-config\") pod \"neutron-548544f678-9jrhc\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.536102 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-combined-ca-bundle\") pod \"neutron-548544f678-9jrhc\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.536494 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-ovndb-tls-certs\") pod \"neutron-548544f678-9jrhc\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.536570 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-httpd-config\") pod \"neutron-548544f678-9jrhc\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.536983 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.536775 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwb4v\" (UniqueName: \"kubernetes.io/projected/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-kube-api-access-nwb4v\") pod \"neutron-548544f678-9jrhc\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.555608 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-httpd-config\") pod \"neutron-548544f678-9jrhc\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.562585 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.567957 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.572280 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-ovndb-tls-certs\") pod \"neutron-548544f678-9jrhc\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.573162 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.573186 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.575521 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7h7bv" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.576159 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-combined-ca-bundle\") pod \"neutron-548544f678-9jrhc\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.580876 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.588745 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-config\") pod \"neutron-548544f678-9jrhc\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.596973 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nwb4v\" (UniqueName: \"kubernetes.io/projected/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-kube-api-access-nwb4v\") pod \"neutron-548544f678-9jrhc\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.653952 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ee5c96f-8699-41e0-9318-4a5ad8af233d" path="/var/lib/kubelet/pods/3ee5c96f-8699-41e0-9318-4a5ad8af233d/volumes" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.741979 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-scripts\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.742119 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-logs\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.742197 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.742272 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: 
\"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.742338 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-config-data\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.742355 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.742380 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfqj7\" (UniqueName: \"kubernetes.io/projected/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-kube-api-access-jfqj7\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.819973 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.824235 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.826446 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.838516 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.843729 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-scripts\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.843789 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-logs\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.843844 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.843863 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.843903 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-config-data\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.843918 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.843941 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfqj7\" (UniqueName: \"kubernetes.io/projected/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-kube-api-access-jfqj7\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.845611 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.852178 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-scripts\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.853446 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-config-data\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.854996 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-logs\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.855420 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.859809 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.869457 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfqj7\" (UniqueName: \"kubernetes.io/projected/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-kube-api-access-jfqj7\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.880334 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.887429 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.951159 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.951294 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t5nx\" (UniqueName: \"kubernetes.io/projected/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-kube-api-access-2t5nx\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.951338 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.951386 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 
10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.951441 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.951460 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-logs\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:25 crc kubenswrapper[4746]: I1211 10:12:25.951528 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.008566 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.052949 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.052994 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-logs\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.053015 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.053118 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.053166 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t5nx\" (UniqueName: \"kubernetes.io/projected/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-kube-api-access-2t5nx\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc 
kubenswrapper[4746]: I1211 10:12:26.053213 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.053269 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.055037 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-logs\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.059585 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.059839 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.060016 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.084568 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.092062 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t5nx\" (UniqueName: \"kubernetes.io/projected/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-kube-api-access-2t5nx\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.094005 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.232924 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.352345 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lftcq"] Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 
10:12:26.456647 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.556382 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-vm6ft"] Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.772851 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-548544f678-9jrhc"] Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.855292 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:12:26 crc kubenswrapper[4746]: I1211 10:12:26.995500 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" event={"ID":"859f4532-560f-4e10-ac15-cf7b466a7a6d","Type":"ContainerStarted","Data":"3f67dbfa990f5d3cd5a0c207efd2c19bf55079e32150d931c79bed0fc14bc82c"} Dec 11 10:12:27 crc kubenswrapper[4746]: I1211 10:12:27.000093 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-548544f678-9jrhc" event={"ID":"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101","Type":"ContainerStarted","Data":"5a9811607faabd467b4098b45031716dd077ec3860d8090089b7fc61b127c4bd"} Dec 11 10:12:27 crc kubenswrapper[4746]: I1211 10:12:27.015356 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b7f654f86-sh94c" event={"ID":"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81","Type":"ContainerStarted","Data":"72dd75ecb1a3f40b2c64f453f5f0614fd81da8ad5eecafdfee950f768dc6416b"} Dec 11 10:12:27 crc kubenswrapper[4746]: I1211 10:12:27.026290 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" event={"ID":"b3ee6299-beee-4379-86e4-89b33e6e11d0","Type":"ContainerStarted","Data":"762ebc00f9d6653b01b7294143f5a673c3e519570ce4417716ea684b92ce2b20"} Dec 11 10:12:27 crc kubenswrapper[4746]: I1211 10:12:27.046781 4746 generic.go:334] "Generic (PLEG): container finished" 
podID="9962918f-3f76-42ae-b292-0c2300106516" containerID="70cf642fe694e74c9df879731c530d62d8f3366c2fdf07e899d4eb6446bc8cbb" exitCode=0 Dec 11 10:12:27 crc kubenswrapper[4746]: I1211 10:12:27.046881 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m9x6s" event={"ID":"9962918f-3f76-42ae-b292-0c2300106516","Type":"ContainerDied","Data":"70cf642fe694e74c9df879731c530d62d8f3366c2fdf07e899d4eb6446bc8cbb"} Dec 11 10:12:27 crc kubenswrapper[4746]: I1211 10:12:27.065033 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79","Type":"ContainerStarted","Data":"be130564269f4726b1cd5b96890cbde2d701ef99a4ac5a303a09a6fd10f2bd53"} Dec 11 10:12:27 crc kubenswrapper[4746]: I1211 10:12:27.102267 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77c4bd4944-jbg88" event={"ID":"016a98db-33b2-4acb-a360-5e8a55aebd6c","Type":"ContainerStarted","Data":"98ac9bd0e996c24ef47fc1358f225bf0638d3c2eb7ec7974d3c9abeb6a0901fa"} Dec 11 10:12:27 crc kubenswrapper[4746]: I1211 10:12:27.327009 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:12:27 crc kubenswrapper[4746]: W1211 10:12:27.363970 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2fb4983_ceca_4637_b9c2_c8e27a4b1992.slice/crio-288cbbca2b3b1e7742f3f88d7eac93a4b6a8ef498ec0705e341bd852ba2a4276 WatchSource:0}: Error finding container 288cbbca2b3b1e7742f3f88d7eac93a4b6a8ef498ec0705e341bd852ba2a4276: Status 404 returned error can't find the container with id 288cbbca2b3b1e7742f3f88d7eac93a4b6a8ef498ec0705e341bd852ba2a4276 Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.120501 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-b7f654f86-sh94c" 
event={"ID":"b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81","Type":"ContainerStarted","Data":"4acb9b97095d603b073ada90213b93779b903bf04a519ce84e5f19a214bc232e"} Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.126921 4746 generic.go:334] "Generic (PLEG): container finished" podID="b3ee6299-beee-4379-86e4-89b33e6e11d0" containerID="cece8af58d55056fa070b9d99636f11b09102e8cf9c5a2e6a166ebbbe61b35ac" exitCode=0 Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.127251 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" event={"ID":"b3ee6299-beee-4379-86e4-89b33e6e11d0","Type":"ContainerDied","Data":"cece8af58d55056fa070b9d99636f11b09102e8cf9c5a2e6a166ebbbe61b35ac"} Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.133259 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4","Type":"ContainerStarted","Data":"47ff7b2405fb4d1019050fab84f181d03568b9a5faf57439d629b2c289f8728e"} Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.149025 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77c4bd4944-jbg88" event={"ID":"016a98db-33b2-4acb-a360-5e8a55aebd6c","Type":"ContainerStarted","Data":"453c2a84853f09eb6406ce82bcc4f8dd06a8ac4f033e57a7416674cbed96eb48"} Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.157941 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c2fb4983-ceca-4637-b9c2-c8e27a4b1992","Type":"ContainerStarted","Data":"288cbbca2b3b1e7742f3f88d7eac93a4b6a8ef498ec0705e341bd852ba2a4276"} Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.168084 4746 generic.go:334] "Generic (PLEG): container finished" podID="859f4532-560f-4e10-ac15-cf7b466a7a6d" containerID="2f7bf9d4199bfeb370de4b10aabb7e02e8a41d3fbb50fd03430f5fc38f646e56" exitCode=0 Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.168156 4746 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" event={"ID":"859f4532-560f-4e10-ac15-cf7b466a7a6d","Type":"ContainerDied","Data":"2f7bf9d4199bfeb370de4b10aabb7e02e8a41d3fbb50fd03430f5fc38f646e56"} Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.186111 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-b7f654f86-sh94c" podStartSLOduration=30.963545681 podStartE2EDuration="33.186080119s" podCreationTimestamp="2025-12-11 10:11:55 +0000 UTC" firstStartedPulling="2025-12-11 10:12:23.644966574 +0000 UTC m=+1116.504829887" lastFinishedPulling="2025-12-11 10:12:25.867501012 +0000 UTC m=+1118.727364325" observedRunningTime="2025-12-11 10:12:28.148231812 +0000 UTC m=+1121.008095125" watchObservedRunningTime="2025-12-11 10:12:28.186080119 +0000 UTC m=+1121.045943432" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.195458 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-548544f678-9jrhc" event={"ID":"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101","Type":"ContainerStarted","Data":"34f9998be0753a53f6efd0fae94810c513ddf757f4f1c98a62b83175766c9f25"} Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.195505 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-548544f678-9jrhc" event={"ID":"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101","Type":"ContainerStarted","Data":"eb638379b914e22cfce30de40cb2cf4ebe7022713c56d8faffa7195121e1bd27"} Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.195519 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.256421 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-77c4bd4944-jbg88" podStartSLOduration=31.053833025 podStartE2EDuration="33.256393866s" podCreationTimestamp="2025-12-11 10:11:55 +0000 UTC" firstStartedPulling="2025-12-11 
10:12:23.659230048 +0000 UTC m=+1116.519093361" lastFinishedPulling="2025-12-11 10:12:25.861790899 +0000 UTC m=+1118.721654202" observedRunningTime="2025-12-11 10:12:28.241184378 +0000 UTC m=+1121.101047691" watchObservedRunningTime="2025-12-11 10:12:28.256393866 +0000 UTC m=+1121.116257179" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.282459 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-548544f678-9jrhc" podStartSLOduration=3.282427665 podStartE2EDuration="3.282427665s" podCreationTimestamp="2025-12-11 10:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:12:28.280661048 +0000 UTC m=+1121.140524361" watchObservedRunningTime="2025-12-11 10:12:28.282427665 +0000 UTC m=+1121.142290988" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.753802 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.890857 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-dns-swift-storage-0\") pod \"859f4532-560f-4e10-ac15-cf7b466a7a6d\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.891721 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rc86c\" (UniqueName: \"kubernetes.io/projected/859f4532-560f-4e10-ac15-cf7b466a7a6d-kube-api-access-rc86c\") pod \"859f4532-560f-4e10-ac15-cf7b466a7a6d\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.891936 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-config\") pod \"859f4532-560f-4e10-ac15-cf7b466a7a6d\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.892114 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-ovsdbserver-sb\") pod \"859f4532-560f-4e10-ac15-cf7b466a7a6d\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.892670 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-ovsdbserver-nb\") pod \"859f4532-560f-4e10-ac15-cf7b466a7a6d\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.893243 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-dns-svc\") pod \"859f4532-560f-4e10-ac15-cf7b466a7a6d\" (UID: \"859f4532-560f-4e10-ac15-cf7b466a7a6d\") " Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.905112 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/859f4532-560f-4e10-ac15-cf7b466a7a6d-kube-api-access-rc86c" (OuterVolumeSpecName: "kube-api-access-rc86c") pod "859f4532-560f-4e10-ac15-cf7b466a7a6d" (UID: "859f4532-560f-4e10-ac15-cf7b466a7a6d"). InnerVolumeSpecName "kube-api-access-rc86c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.947865 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "859f4532-560f-4e10-ac15-cf7b466a7a6d" (UID: "859f4532-560f-4e10-ac15-cf7b466a7a6d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.974658 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "859f4532-560f-4e10-ac15-cf7b466a7a6d" (UID: "859f4532-560f-4e10-ac15-cf7b466a7a6d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.974848 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-config" (OuterVolumeSpecName: "config") pod "859f4532-560f-4e10-ac15-cf7b466a7a6d" (UID: "859f4532-560f-4e10-ac15-cf7b466a7a6d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.988546 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "859f4532-560f-4e10-ac15-cf7b466a7a6d" (UID: "859f4532-560f-4e10-ac15-cf7b466a7a6d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.989260 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-m9x6s" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.992197 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "859f4532-560f-4e10-ac15-cf7b466a7a6d" (UID: "859f4532-560f-4e10-ac15-cf7b466a7a6d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.995944 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.996000 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rc86c\" (UniqueName: \"kubernetes.io/projected/859f4532-560f-4e10-ac15-cf7b466a7a6d-kube-api-access-rc86c\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.996015 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.996032 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.996058 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:28 crc kubenswrapper[4746]: I1211 10:12:28.996071 4746 reconciler_common.go:293] "Volume detached for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/859f4532-560f-4e10-ac15-cf7b466a7a6d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.097392 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-combined-ca-bundle\") pod \"9962918f-3f76-42ae-b292-0c2300106516\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.097637 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9962918f-3f76-42ae-b292-0c2300106516-logs\") pod \"9962918f-3f76-42ae-b292-0c2300106516\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.097728 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-config-data\") pod \"9962918f-3f76-42ae-b292-0c2300106516\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.097813 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-scripts\") pod \"9962918f-3f76-42ae-b292-0c2300106516\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.097992 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54gq6\" (UniqueName: \"kubernetes.io/projected/9962918f-3f76-42ae-b292-0c2300106516-kube-api-access-54gq6\") pod \"9962918f-3f76-42ae-b292-0c2300106516\" (UID: \"9962918f-3f76-42ae-b292-0c2300106516\") " Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.098324 4746 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/9962918f-3f76-42ae-b292-0c2300106516-logs" (OuterVolumeSpecName: "logs") pod "9962918f-3f76-42ae-b292-0c2300106516" (UID: "9962918f-3f76-42ae-b292-0c2300106516"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.099258 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9962918f-3f76-42ae-b292-0c2300106516-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.116987 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9962918f-3f76-42ae-b292-0c2300106516-kube-api-access-54gq6" (OuterVolumeSpecName: "kube-api-access-54gq6") pod "9962918f-3f76-42ae-b292-0c2300106516" (UID: "9962918f-3f76-42ae-b292-0c2300106516"). InnerVolumeSpecName "kube-api-access-54gq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.117302 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-scripts" (OuterVolumeSpecName: "scripts") pod "9962918f-3f76-42ae-b292-0c2300106516" (UID: "9962918f-3f76-42ae-b292-0c2300106516"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.142031 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9962918f-3f76-42ae-b292-0c2300106516" (UID: "9962918f-3f76-42ae-b292-0c2300106516"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.152309 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-config-data" (OuterVolumeSpecName: "config-data") pod "9962918f-3f76-42ae-b292-0c2300106516" (UID: "9962918f-3f76-42ae-b292-0c2300106516"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.209518 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54gq6\" (UniqueName: \"kubernetes.io/projected/9962918f-3f76-42ae-b292-0c2300106516-kube-api-access-54gq6\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.209577 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.209590 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.209604 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9962918f-3f76-42ae-b292-0c2300106516-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.218015 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-bb679f888-dj844"] Dec 11 10:12:29 crc kubenswrapper[4746]: E1211 10:12:29.218534 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9962918f-3f76-42ae-b292-0c2300106516" containerName="placement-db-sync" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.218559 4746 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="9962918f-3f76-42ae-b292-0c2300106516" containerName="placement-db-sync" Dec 11 10:12:29 crc kubenswrapper[4746]: E1211 10:12:29.218574 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="859f4532-560f-4e10-ac15-cf7b466a7a6d" containerName="init" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.218580 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="859f4532-560f-4e10-ac15-cf7b466a7a6d" containerName="init" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.218790 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="859f4532-560f-4e10-ac15-cf7b466a7a6d" containerName="init" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.218805 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="9962918f-3f76-42ae-b292-0c2300106516" containerName="placement-db-sync" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.220036 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.226575 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.226756 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.236749 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bb679f888-dj844"] Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.257132 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4","Type":"ContainerStarted","Data":"e563d06655b7c8ed26e37a6e7b7c99a01649b5005ee3b4fc5a8ac4e958ff6072"} Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.260984 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"c2fb4983-ceca-4637-b9c2-c8e27a4b1992","Type":"ContainerStarted","Data":"7789208ad0b6678a2e44d64e382e2006f53f60164a9e4f7766c4d6ce1bc027fe"} Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.264568 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" event={"ID":"859f4532-560f-4e10-ac15-cf7b466a7a6d","Type":"ContainerDied","Data":"3f67dbfa990f5d3cd5a0c207efd2c19bf55079e32150d931c79bed0fc14bc82c"} Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.264608 4746 scope.go:117] "RemoveContainer" containerID="2f7bf9d4199bfeb370de4b10aabb7e02e8a41d3fbb50fd03430f5fc38f646e56" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.264719 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-lftcq" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.290217 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" event={"ID":"b3ee6299-beee-4379-86e4-89b33e6e11d0","Type":"ContainerStarted","Data":"1e76c7607f91642a298a018feb57611053d8ba931362e37d6698a32f1f74e771"} Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.308243 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-m9x6s" event={"ID":"9962918f-3f76-42ae-b292-0c2300106516","Type":"ContainerDied","Data":"8de6d6b55b515ad7afc4ce59200f20fc3ccbed7b4a317de14e5ca14b2b0a5418"} Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.308292 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8de6d6b55b515ad7afc4ce59200f20fc3ccbed7b4a317de14e5ca14b2b0a5418" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.308367 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-m9x6s" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.456398 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhtvd\" (UniqueName: \"kubernetes.io/projected/621d56dd-8011-4236-a393-6b57891b3f37-kube-api-access-bhtvd\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.456480 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/621d56dd-8011-4236-a393-6b57891b3f37-internal-tls-certs\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.456593 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621d56dd-8011-4236-a393-6b57891b3f37-config-data\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.456672 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/621d56dd-8011-4236-a393-6b57891b3f37-public-tls-certs\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.456831 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621d56dd-8011-4236-a393-6b57891b3f37-combined-ca-bundle\") pod \"placement-bb679f888-dj844\" 
(UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.456939 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621d56dd-8011-4236-a393-6b57891b3f37-scripts\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.456989 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621d56dd-8011-4236-a393-6b57891b3f37-logs\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.469819 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lftcq"] Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.524697 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-lftcq"] Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.557468 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.558956 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/621d56dd-8011-4236-a393-6b57891b3f37-public-tls-certs\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.559018 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621d56dd-8011-4236-a393-6b57891b3f37-combined-ca-bundle\") 
pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.559083 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621d56dd-8011-4236-a393-6b57891b3f37-scripts\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.559535 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621d56dd-8011-4236-a393-6b57891b3f37-logs\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.559635 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhtvd\" (UniqueName: \"kubernetes.io/projected/621d56dd-8011-4236-a393-6b57891b3f37-kube-api-access-bhtvd\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.559655 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/621d56dd-8011-4236-a393-6b57891b3f37-internal-tls-certs\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.559685 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621d56dd-8011-4236-a393-6b57891b3f37-config-data\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " 
pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.568407 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/621d56dd-8011-4236-a393-6b57891b3f37-logs\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.571181 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/621d56dd-8011-4236-a393-6b57891b3f37-config-data\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.571626 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/621d56dd-8011-4236-a393-6b57891b3f37-scripts\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.573601 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/621d56dd-8011-4236-a393-6b57891b3f37-combined-ca-bundle\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.574882 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/621d56dd-8011-4236-a393-6b57891b3f37-public-tls-certs\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.575922 4746 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/621d56dd-8011-4236-a393-6b57891b3f37-internal-tls-certs\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.586395 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.590151 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhtvd\" (UniqueName: \"kubernetes.io/projected/621d56dd-8011-4236-a393-6b57891b3f37-kube-api-access-bhtvd\") pod \"placement-bb679f888-dj844\" (UID: \"621d56dd-8011-4236-a393-6b57891b3f37\") " pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.649862 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="859f4532-560f-4e10-ac15-cf7b466a7a6d" path="/var/lib/kubelet/pods/859f4532-560f-4e10-ac15-cf7b466a7a6d/volumes" Dec 11 10:12:29 crc kubenswrapper[4746]: I1211 10:12:29.863340 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.331391 4746 generic.go:334] "Generic (PLEG): container finished" podID="dbcfe442-59c5-4c0e-b051-9ea04f8127b3" containerID="eafa096af237c54a93ce9c9f67f35f86f217b42bfc9f590f905ee5ddfc6b61df" exitCode=0 Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.331464 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nxg7v" event={"ID":"dbcfe442-59c5-4c0e-b051-9ea04f8127b3","Type":"ContainerDied","Data":"eafa096af237c54a93ce9c9f67f35f86f217b42bfc9f590f905ee5ddfc6b61df"} Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.445144 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4","Type":"ContainerStarted","Data":"83d22848a29fac68d2bee919a201fcbe876bc6fb2af467548819ad92e55444be"} Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.445547 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" containerName="glance-log" containerID="cri-o://e563d06655b7c8ed26e37a6e7b7c99a01649b5005ee3b4fc5a8ac4e958ff6072" gracePeriod=30 Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.445843 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" containerName="glance-httpd" containerID="cri-o://83d22848a29fac68d2bee919a201fcbe876bc6fb2af467548819ad92e55444be" gracePeriod=30 Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.469585 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c2fb4983-ceca-4637-b9c2-c8e27a4b1992","Type":"ContainerStarted","Data":"27229dd93d30bc60595b295be8e4559343b1893e021e53d12d36858d3f315e61"} Dec 11 10:12:30 
crc kubenswrapper[4746]: I1211 10:12:30.469770 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c2fb4983-ceca-4637-b9c2-c8e27a4b1992" containerName="glance-log" containerID="cri-o://7789208ad0b6678a2e44d64e382e2006f53f60164a9e4f7766c4d6ce1bc027fe" gracePeriod=30 Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.469863 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c2fb4983-ceca-4637-b9c2-c8e27a4b1992" containerName="glance-httpd" containerID="cri-o://27229dd93d30bc60595b295be8e4559343b1893e021e53d12d36858d3f315e61" gracePeriod=30 Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.477269 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.512488 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-bb679f888-dj844"] Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.512904 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.512892426 podStartE2EDuration="6.512892426s" podCreationTimestamp="2025-12-11 10:12:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:12:30.501206512 +0000 UTC m=+1123.361069825" watchObservedRunningTime="2025-12-11 10:12:30.512892426 +0000 UTC m=+1123.372755739" Dec 11 10:12:30 crc kubenswrapper[4746]: W1211 10:12:30.521388 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod621d56dd_8011_4236_a393_6b57891b3f37.slice/crio-de89cbff42cda3d9713ebe20d4c1ffdca192f286c46af21d0175ec721b81519a WatchSource:0}: Error finding container 
de89cbff42cda3d9713ebe20d4c1ffdca192f286c46af21d0175ec721b81519a: Status 404 returned error can't find the container with id de89cbff42cda3d9713ebe20d4c1ffdca192f286c46af21d0175ec721b81519a Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.553485 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.553448864 podStartE2EDuration="6.553448864s" podCreationTimestamp="2025-12-11 10:12:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:12:30.529121032 +0000 UTC m=+1123.388984355" watchObservedRunningTime="2025-12-11 10:12:30.553448864 +0000 UTC m=+1123.413312187" Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.569397 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" podStartSLOduration=5.569380183 podStartE2EDuration="5.569380183s" podCreationTimestamp="2025-12-11 10:12:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:12:30.557891004 +0000 UTC m=+1123.417754337" watchObservedRunningTime="2025-12-11 10:12:30.569380183 +0000 UTC m=+1123.429243496" Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.877395 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7754896b7c-5hf99"] Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.879037 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.882764 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.887003 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.892727 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7754896b7c-5hf99"] Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.954329 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwwsj\" (UniqueName: \"kubernetes.io/projected/217bbeb1-db62-4c24-82de-be79c9bad92b-kube-api-access-mwwsj\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.954379 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-combined-ca-bundle\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.954483 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-internal-tls-certs\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.954508 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-httpd-config\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.954529 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-ovndb-tls-certs\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.954559 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-config\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:30 crc kubenswrapper[4746]: I1211 10:12:30.954594 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-public-tls-certs\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.056403 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwwsj\" (UniqueName: \"kubernetes.io/projected/217bbeb1-db62-4c24-82de-be79c9bad92b-kube-api-access-mwwsj\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.056460 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-combined-ca-bundle\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.056518 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-internal-tls-certs\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.056539 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-httpd-config\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.056559 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-ovndb-tls-certs\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.056577 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-config\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.056618 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-public-tls-certs\") pod \"neutron-7754896b7c-5hf99\" 
(UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.062467 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-ovndb-tls-certs\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.064769 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-combined-ca-bundle\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.065293 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-internal-tls-certs\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.065351 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-httpd-config\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.067420 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-config\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 
10:12:31.068632 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/217bbeb1-db62-4c24-82de-be79c9bad92b-public-tls-certs\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.074841 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwwsj\" (UniqueName: \"kubernetes.io/projected/217bbeb1-db62-4c24-82de-be79c9bad92b-kube-api-access-mwwsj\") pod \"neutron-7754896b7c-5hf99\" (UID: \"217bbeb1-db62-4c24-82de-be79c9bad92b\") " pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.198844 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.510490 4746 generic.go:334] "Generic (PLEG): container finished" podID="4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" containerID="83d22848a29fac68d2bee919a201fcbe876bc6fb2af467548819ad92e55444be" exitCode=0 Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.510909 4746 generic.go:334] "Generic (PLEG): container finished" podID="4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" containerID="e563d06655b7c8ed26e37a6e7b7c99a01649b5005ee3b4fc5a8ac4e958ff6072" exitCode=143 Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.510981 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4","Type":"ContainerDied","Data":"83d22848a29fac68d2bee919a201fcbe876bc6fb2af467548819ad92e55444be"} Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.511013 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4","Type":"ContainerDied","Data":"e563d06655b7c8ed26e37a6e7b7c99a01649b5005ee3b4fc5a8ac4e958ff6072"} Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.527808 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bb679f888-dj844" event={"ID":"621d56dd-8011-4236-a393-6b57891b3f37","Type":"ContainerStarted","Data":"de89cbff42cda3d9713ebe20d4c1ffdca192f286c46af21d0175ec721b81519a"} Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.531377 4746 generic.go:334] "Generic (PLEG): container finished" podID="c2fb4983-ceca-4637-b9c2-c8e27a4b1992" containerID="27229dd93d30bc60595b295be8e4559343b1893e021e53d12d36858d3f315e61" exitCode=0 Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.531420 4746 generic.go:334] "Generic (PLEG): container finished" podID="c2fb4983-ceca-4637-b9c2-c8e27a4b1992" containerID="7789208ad0b6678a2e44d64e382e2006f53f60164a9e4f7766c4d6ce1bc027fe" exitCode=143 Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.531548 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c2fb4983-ceca-4637-b9c2-c8e27a4b1992","Type":"ContainerDied","Data":"27229dd93d30bc60595b295be8e4559343b1893e021e53d12d36858d3f315e61"} Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.531635 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c2fb4983-ceca-4637-b9c2-c8e27a4b1992","Type":"ContainerDied","Data":"7789208ad0b6678a2e44d64e382e2006f53f60164a9e4f7766c4d6ce1bc027fe"} Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.949003 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:31 crc kubenswrapper[4746]: I1211 10:12:31.968916 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7754896b7c-5hf99"] Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.093099 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-fernet-keys\") pod \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.093163 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqsrh\" (UniqueName: \"kubernetes.io/projected/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-kube-api-access-qqsrh\") pod \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.093232 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-combined-ca-bundle\") pod \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.093262 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-config-data\") pod \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.093299 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-scripts\") pod \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " Dec 11 10:12:32 crc 
kubenswrapper[4746]: I1211 10:12:32.093345 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-credential-keys\") pod \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\" (UID: \"dbcfe442-59c5-4c0e-b051-9ea04f8127b3\") " Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.105203 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-scripts" (OuterVolumeSpecName: "scripts") pod "dbcfe442-59c5-4c0e-b051-9ea04f8127b3" (UID: "dbcfe442-59c5-4c0e-b051-9ea04f8127b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.108873 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dbcfe442-59c5-4c0e-b051-9ea04f8127b3" (UID: "dbcfe442-59c5-4c0e-b051-9ea04f8127b3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.111692 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-kube-api-access-qqsrh" (OuterVolumeSpecName: "kube-api-access-qqsrh") pod "dbcfe442-59c5-4c0e-b051-9ea04f8127b3" (UID: "dbcfe442-59c5-4c0e-b051-9ea04f8127b3"). InnerVolumeSpecName "kube-api-access-qqsrh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.126589 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "dbcfe442-59c5-4c0e-b051-9ea04f8127b3" (UID: "dbcfe442-59c5-4c0e-b051-9ea04f8127b3"). 
InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.137237 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-config-data" (OuterVolumeSpecName: "config-data") pod "dbcfe442-59c5-4c0e-b051-9ea04f8127b3" (UID: "dbcfe442-59c5-4c0e-b051-9ea04f8127b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.158267 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dbcfe442-59c5-4c0e-b051-9ea04f8127b3" (UID: "dbcfe442-59c5-4c0e-b051-9ea04f8127b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.195172 4746 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.195214 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqsrh\" (UniqueName: \"kubernetes.io/projected/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-kube-api-access-qqsrh\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.195228 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.195239 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.195248 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.195256 4746 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dbcfe442-59c5-4c0e-b051-9ea04f8127b3-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.262540 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.399817 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-logs\") pod \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.400321 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-httpd-run\") pod \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.400576 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-logs" (OuterVolumeSpecName: "logs") pod "c2fb4983-ceca-4637-b9c2-c8e27a4b1992" (UID: "c2fb4983-ceca-4637-b9c2-c8e27a4b1992"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.405398 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c2fb4983-ceca-4637-b9c2-c8e27a4b1992" (UID: "c2fb4983-ceca-4637-b9c2-c8e27a4b1992"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.405484 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t5nx\" (UniqueName: \"kubernetes.io/projected/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-kube-api-access-2t5nx\") pod \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.405537 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-scripts\") pod \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.405584 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-combined-ca-bundle\") pod \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.405639 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.405687 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-config-data\") pod \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\" (UID: \"c2fb4983-ceca-4637-b9c2-c8e27a4b1992\") " Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.407458 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.407479 4746 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.410563 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-kube-api-access-2t5nx" (OuterVolumeSpecName: "kube-api-access-2t5nx") pod "c2fb4983-ceca-4637-b9c2-c8e27a4b1992" (UID: "c2fb4983-ceca-4637-b9c2-c8e27a4b1992"). InnerVolumeSpecName "kube-api-access-2t5nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.410762 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "c2fb4983-ceca-4637-b9c2-c8e27a4b1992" (UID: "c2fb4983-ceca-4637-b9c2-c8e27a4b1992"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.417787 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-scripts" (OuterVolumeSpecName: "scripts") pod "c2fb4983-ceca-4637-b9c2-c8e27a4b1992" (UID: "c2fb4983-ceca-4637-b9c2-c8e27a4b1992"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.445356 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2fb4983-ceca-4637-b9c2-c8e27a4b1992" (UID: "c2fb4983-ceca-4637-b9c2-c8e27a4b1992"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.484130 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-config-data" (OuterVolumeSpecName: "config-data") pod "c2fb4983-ceca-4637-b9c2-c8e27a4b1992" (UID: "c2fb4983-ceca-4637-b9c2-c8e27a4b1992"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.511158 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.511219 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.511270 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.511282 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:32 crc 
kubenswrapper[4746]: I1211 10:12:32.511292 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t5nx\" (UniqueName: \"kubernetes.io/projected/c2fb4983-ceca-4637-b9c2-c8e27a4b1992-kube-api-access-2t5nx\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.556693 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bb679f888-dj844" event={"ID":"621d56dd-8011-4236-a393-6b57891b3f37","Type":"ContainerStarted","Data":"c186fa522233efbb252e24fff07f3e9b4096e478ce6fd72e3d7233b11e4d5b8b"} Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.560565 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.576149 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c2fb4983-ceca-4637-b9c2-c8e27a4b1992","Type":"ContainerDied","Data":"288cbbca2b3b1e7742f3f88d7eac93a4b6a8ef498ec0705e341bd852ba2a4276"} Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.576215 4746 scope.go:117] "RemoveContainer" containerID="27229dd93d30bc60595b295be8e4559343b1893e021e53d12d36858d3f315e61" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.576381 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.613566 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.623333 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-65c49f59b-9mqvh"] Dec 11 10:12:32 crc kubenswrapper[4746]: E1211 10:12:32.624023 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbcfe442-59c5-4c0e-b051-9ea04f8127b3" containerName="keystone-bootstrap" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.624066 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbcfe442-59c5-4c0e-b051-9ea04f8127b3" containerName="keystone-bootstrap" Dec 11 10:12:32 crc kubenswrapper[4746]: E1211 10:12:32.624092 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2fb4983-ceca-4637-b9c2-c8e27a4b1992" containerName="glance-log" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.624100 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2fb4983-ceca-4637-b9c2-c8e27a4b1992" containerName="glance-log" Dec 11 10:12:32 crc kubenswrapper[4746]: E1211 10:12:32.624119 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2fb4983-ceca-4637-b9c2-c8e27a4b1992" containerName="glance-httpd" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.624128 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2fb4983-ceca-4637-b9c2-c8e27a4b1992" containerName="glance-httpd" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.624340 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2fb4983-ceca-4637-b9c2-c8e27a4b1992" containerName="glance-log" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.624365 4746 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c2fb4983-ceca-4637-b9c2-c8e27a4b1992" containerName="glance-httpd" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.624381 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbcfe442-59c5-4c0e-b051-9ea04f8127b3" containerName="keystone-bootstrap" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.625363 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.630872 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.631159 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.652145 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-nxg7v" event={"ID":"dbcfe442-59c5-4c0e-b051-9ea04f8127b3","Type":"ContainerDied","Data":"3bf5d5d184150e9575882c3bc0a84f9bad403432d1e4d4035ab2c1d4c143dda3"} Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.652187 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bf5d5d184150e9575882c3bc0a84f9bad403432d1e4d4035ab2c1d4c143dda3" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.653354 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-nxg7v" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.673204 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7754896b7c-5hf99" event={"ID":"217bbeb1-db62-4c24-82de-be79c9bad92b","Type":"ContainerStarted","Data":"d72987cb935c8d69b5b1a7c545475ecc2a1adcf0d9d1f9ccdd69537d23429588"} Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.709601 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-65c49f59b-9mqvh"] Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.743637 4746 scope.go:117] "RemoveContainer" containerID="7789208ad0b6678a2e44d64e382e2006f53f60164a9e4f7766c4d6ce1bc027fe" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.787948 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.800426 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.828394 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncvgx\" (UniqueName: \"kubernetes.io/projected/689e0dd9-7055-4ca2-81b3-c66d9850e166-kube-api-access-ncvgx\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.828446 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-fernet-keys\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.828468 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-config-data\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.828506 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-internal-tls-certs\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.828555 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-public-tls-certs\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.828575 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-credential-keys\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.828632 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-combined-ca-bundle\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.828674 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-scripts\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.830245 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.832488 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.835918 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.835920 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.864193 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.930291 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-public-tls-certs\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.930417 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-credential-keys\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.930480 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-combined-ca-bundle\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.930519 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-scripts\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.930565 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncvgx\" (UniqueName: \"kubernetes.io/projected/689e0dd9-7055-4ca2-81b3-c66d9850e166-kube-api-access-ncvgx\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.930582 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-fernet-keys\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.930601 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-config-data\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.930681 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-internal-tls-certs\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.935096 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-scripts\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.936148 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-fernet-keys\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.937775 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-internal-tls-certs\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.956902 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-credential-keys\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.956902 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-public-tls-certs\") pod \"keystone-65c49f59b-9mqvh\" (UID: 
\"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.957551 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-combined-ca-bundle\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.957723 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/689e0dd9-7055-4ca2-81b3-c66d9850e166-config-data\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:32 crc kubenswrapper[4746]: I1211 10:12:32.979803 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncvgx\" (UniqueName: \"kubernetes.io/projected/689e0dd9-7055-4ca2-81b3-c66d9850e166-kube-api-access-ncvgx\") pod \"keystone-65c49f59b-9mqvh\" (UID: \"689e0dd9-7055-4ca2-81b3-c66d9850e166\") " pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.002554 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.037659 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53afe096-e47a-472d-a35a-a2da61b39aae-logs\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.037813 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.037864 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53afe096-e47a-472d-a35a-a2da61b39aae-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.037889 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.037984 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-config-data\") pod \"glance-default-internal-api-0\" (UID: 
\"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.038006 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh5vh\" (UniqueName: \"kubernetes.io/projected/53afe096-e47a-472d-a35a-a2da61b39aae-kube-api-access-xh5vh\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.038060 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-scripts\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.038085 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.140222 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-config-data\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.140274 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh5vh\" (UniqueName: \"kubernetes.io/projected/53afe096-e47a-472d-a35a-a2da61b39aae-kube-api-access-xh5vh\") pod \"glance-default-internal-api-0\" (UID: 
\"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.140308 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-scripts\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.140328 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.140352 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53afe096-e47a-472d-a35a-a2da61b39aae-logs\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.140392 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.140426 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53afe096-e47a-472d-a35a-a2da61b39aae-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc 
kubenswrapper[4746]: I1211 10:12:33.140450 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.141430 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.142867 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53afe096-e47a-472d-a35a-a2da61b39aae-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.143282 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53afe096-e47a-472d-a35a-a2da61b39aae-logs\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.145361 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-config-data\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.147612 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.158722 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-scripts\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.159273 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.162568 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh5vh\" (UniqueName: \"kubernetes.io/projected/53afe096-e47a-472d-a35a-a2da61b39aae-kube-api-access-xh5vh\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.187507 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.462155 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.642582 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2fb4983-ceca-4637-b9c2-c8e27a4b1992" path="/var/lib/kubelet/pods/c2fb4983-ceca-4637-b9c2-c8e27a4b1992/volumes" Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.728303 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7754896b7c-5hf99" event={"ID":"217bbeb1-db62-4c24-82de-be79c9bad92b","Type":"ContainerStarted","Data":"70aa6bed8df62bf63978919a58dca22dc78e6ec3cc89fed4d39e43b0eeaaea28"} Dec 11 10:12:33 crc kubenswrapper[4746]: I1211 10:12:33.730403 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-bb679f888-dj844" event={"ID":"621d56dd-8011-4236-a393-6b57891b3f37","Type":"ContainerStarted","Data":"80baac8604b9e087fc7b4cfbd73748e2b8bb67cf26407fea803feebd31305078"} Dec 11 10:12:34 crc kubenswrapper[4746]: I1211 10:12:34.842520 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:34 crc kubenswrapper[4746]: I1211 10:12:34.842653 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-bb679f888-dj844" Dec 11 10:12:34 crc kubenswrapper[4746]: I1211 10:12:34.986908 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-bb679f888-dj844" podStartSLOduration=5.986881148 podStartE2EDuration="5.986881148s" podCreationTimestamp="2025-12-11 10:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:12:34.981867983 +0000 UTC m=+1127.841731296" watchObservedRunningTime="2025-12-11 10:12:34.986881148 +0000 UTC m=+1127.846744461" Dec 11 10:12:35 crc kubenswrapper[4746]: I1211 10:12:35.539682 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:12:35 crc kubenswrapper[4746]: I1211 10:12:35.680392 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-24z6n"] Dec 11 10:12:35 crc kubenswrapper[4746]: I1211 10:12:35.680787 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cf78879c9-24z6n" podUID="adc6dfbf-d8de-4c67-b56c-83127bd0dac5" containerName="dnsmasq-dns" containerID="cri-o://f885efe0e4e9ccfd18d9dfb95dcb0fac3a0da25fadcaebbe269ec15c28383fc1" gracePeriod=10 Dec 11 10:12:36 crc kubenswrapper[4746]: I1211 10:12:36.156678 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:12:36 crc kubenswrapper[4746]: I1211 10:12:36.156764 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:12:36 crc kubenswrapper[4746]: I1211 10:12:36.249136 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:12:36 crc kubenswrapper[4746]: I1211 10:12:36.249212 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:12:37 crc kubenswrapper[4746]: I1211 10:12:37.514542 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-24z6n" podUID="adc6dfbf-d8de-4c67-b56c-83127bd0dac5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: connect: connection refused" Dec 11 10:12:38 crc kubenswrapper[4746]: I1211 10:12:38.970613 4746 generic.go:334] "Generic (PLEG): container finished" podID="adc6dfbf-d8de-4c67-b56c-83127bd0dac5" containerID="f885efe0e4e9ccfd18d9dfb95dcb0fac3a0da25fadcaebbe269ec15c28383fc1" exitCode=0 Dec 11 10:12:38 crc kubenswrapper[4746]: I1211 10:12:38.970729 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-cf78879c9-24z6n" event={"ID":"adc6dfbf-d8de-4c67-b56c-83127bd0dac5","Type":"ContainerDied","Data":"f885efe0e4e9ccfd18d9dfb95dcb0fac3a0da25fadcaebbe269ec15c28383fc1"} Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.585261 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.658228 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-config-data\") pod \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.658316 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-combined-ca-bundle\") pod \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.658387 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-httpd-run\") pod \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.658431 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-logs\") pod \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.658471 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfqj7\" (UniqueName: 
\"kubernetes.io/projected/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-kube-api-access-jfqj7\") pod \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.658581 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.658623 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-scripts\") pod \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\" (UID: \"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4\") " Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.660367 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" (UID: "4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.664489 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-logs" (OuterVolumeSpecName: "logs") pod "4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" (UID: "4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.671560 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-kube-api-access-jfqj7" (OuterVolumeSpecName: "kube-api-access-jfqj7") pod "4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" (UID: "4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4"). InnerVolumeSpecName "kube-api-access-jfqj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.671603 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" (UID: "4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.685729 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-scripts" (OuterVolumeSpecName: "scripts") pod "4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" (UID: "4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.701322 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" (UID: "4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.763355 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.903968 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-config-data" (OuterVolumeSpecName: "config-data") pod "4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" (UID: "4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.909424 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.909479 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.909499 4746 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.909511 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.909522 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfqj7\" (UniqueName: 
\"kubernetes.io/projected/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-kube-api-access-jfqj7\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.917538 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.993170 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4","Type":"ContainerDied","Data":"47ff7b2405fb4d1019050fab84f181d03568b9a5faf57439d629b2c289f8728e"} Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.993278 4746 scope.go:117] "RemoveContainer" containerID="83d22848a29fac68d2bee919a201fcbe876bc6fb2af467548819ad92e55444be" Dec 11 10:12:40 crc kubenswrapper[4746]: I1211 10:12:40.993549 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.020561 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.020621 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.054040 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.071551 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.083864 4746 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-external-api-0"] Dec 11 10:12:41 crc kubenswrapper[4746]: E1211 10:12:41.084429 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" containerName="glance-httpd" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.084452 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" containerName="glance-httpd" Dec 11 10:12:41 crc kubenswrapper[4746]: E1211 10:12:41.084472 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" containerName="glance-log" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.084480 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" containerName="glance-log" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.084677 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" containerName="glance-httpd" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.084708 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" containerName="glance-log" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.087163 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.093920 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.095935 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.096293 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.230894 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.231043 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-logs\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.231143 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-scripts\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.231190 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.231295 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-777wf\" (UniqueName: \"kubernetes.io/projected/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-kube-api-access-777wf\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.231341 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.231389 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.231726 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-config-data\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.334302 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.334390 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-777wf\" (UniqueName: \"kubernetes.io/projected/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-kube-api-access-777wf\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.334439 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.334481 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.334531 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-config-data\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.334599 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.334620 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-logs\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.334662 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-scripts\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.334699 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.335832 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.336254 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-logs\") pod 
\"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.340920 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-scripts\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.341514 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.341985 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-config-data\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.345246 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.358727 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-777wf\" (UniqueName: \"kubernetes.io/projected/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-kube-api-access-777wf\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " 
pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.381573 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") " pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.418349 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 11 10:12:41 crc kubenswrapper[4746]: I1211 10:12:41.645561 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4" path="/var/lib/kubelet/pods/4f5063f0-a873-4ad3-8ed6-1aecbbe32cb4/volumes" Dec 11 10:12:42 crc kubenswrapper[4746]: I1211 10:12:42.792139 4746 scope.go:117] "RemoveContainer" containerID="e563d06655b7c8ed26e37a6e7b7c99a01649b5005ee3b4fc5a8ac4e958ff6072" Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.041384 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cf78879c9-24z6n" event={"ID":"adc6dfbf-d8de-4c67-b56c-83127bd0dac5","Type":"ContainerDied","Data":"be8475c50f38a4b6d36ce2722d9da079e4d37a6da763274288a2d22ba2ac0446"} Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.041470 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be8475c50f38a4b6d36ce2722d9da079e4d37a6da763274288a2d22ba2ac0446" Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.100349 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.288558 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-dns-svc\") pod \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.289152 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btq5g\" (UniqueName: \"kubernetes.io/projected/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-kube-api-access-btq5g\") pod \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.289189 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-ovsdbserver-sb\") pod \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.289414 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-config\") pod \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.289465 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-ovsdbserver-nb\") pod \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.289571 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-dns-swift-storage-0\") pod \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\" (UID: \"adc6dfbf-d8de-4c67-b56c-83127bd0dac5\") " Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.296488 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-kube-api-access-btq5g" (OuterVolumeSpecName: "kube-api-access-btq5g") pod "adc6dfbf-d8de-4c67-b56c-83127bd0dac5" (UID: "adc6dfbf-d8de-4c67-b56c-83127bd0dac5"). InnerVolumeSpecName "kube-api-access-btq5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.398270 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btq5g\" (UniqueName: \"kubernetes.io/projected/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-kube-api-access-btq5g\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.401307 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "adc6dfbf-d8de-4c67-b56c-83127bd0dac5" (UID: "adc6dfbf-d8de-4c67-b56c-83127bd0dac5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.414201 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-config" (OuterVolumeSpecName: "config") pod "adc6dfbf-d8de-4c67-b56c-83127bd0dac5" (UID: "adc6dfbf-d8de-4c67-b56c-83127bd0dac5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.417618 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "adc6dfbf-d8de-4c67-b56c-83127bd0dac5" (UID: "adc6dfbf-d8de-4c67-b56c-83127bd0dac5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.424926 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "adc6dfbf-d8de-4c67-b56c-83127bd0dac5" (UID: "adc6dfbf-d8de-4c67-b56c-83127bd0dac5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.456955 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "adc6dfbf-d8de-4c67-b56c-83127bd0dac5" (UID: "adc6dfbf-d8de-4c67-b56c-83127bd0dac5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.500361 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.500455 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.500482 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.500546 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.500565 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/adc6dfbf-d8de-4c67-b56c-83127bd0dac5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.625908 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-65c49f59b-9mqvh"] Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.657612 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:12:43 crc kubenswrapper[4746]: W1211 10:12:43.672398 4746 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53afe096_e47a_472d_a35a_a2da61b39aae.slice/crio-afbd8d88d7a49052d4cf0000b62afaa9fe198e3a7dfef8c307592d6ee2037c2d WatchSource:0}: Error finding container afbd8d88d7a49052d4cf0000b62afaa9fe198e3a7dfef8c307592d6ee2037c2d: Status 404 returned error can't find the container with id afbd8d88d7a49052d4cf0000b62afaa9fe198e3a7dfef8c307592d6ee2037c2d Dec 11 10:12:43 crc kubenswrapper[4746]: I1211 10:12:43.725634 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:12:43 crc kubenswrapper[4746]: W1211 10:12:43.754164 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2c7b1d0_c00f_4c2f_b5cd_ca67ce430fe5.slice/crio-ad52e4da709f649f4f7479bf2a93d47c52b0c34e8fd6940596e343ed6c08ee86 WatchSource:0}: Error finding container ad52e4da709f649f4f7479bf2a93d47c52b0c34e8fd6940596e343ed6c08ee86: Status 404 returned error can't find the container with id ad52e4da709f649f4f7479bf2a93d47c52b0c34e8fd6940596e343ed6c08ee86 Dec 11 10:12:44 crc kubenswrapper[4746]: I1211 10:12:44.092101 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79","Type":"ContainerStarted","Data":"19f9c318257ca4d1a480e995a0556dc6a7d53cb02db05c2ecc3badde42bd3dd5"} Dec 11 10:12:44 crc kubenswrapper[4746]: I1211 10:12:44.095037 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53afe096-e47a-472d-a35a-a2da61b39aae","Type":"ContainerStarted","Data":"afbd8d88d7a49052d4cf0000b62afaa9fe198e3a7dfef8c307592d6ee2037c2d"} Dec 11 10:12:44 crc kubenswrapper[4746]: I1211 10:12:44.099725 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lznjj" 
event={"ID":"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc","Type":"ContainerStarted","Data":"cd8fbfb6952d9f93570bbee8cf55a8f0b225b52bc83eb395b716d78cffae9157"} Dec 11 10:12:44 crc kubenswrapper[4746]: I1211 10:12:44.106035 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5","Type":"ContainerStarted","Data":"ad52e4da709f649f4f7479bf2a93d47c52b0c34e8fd6940596e343ed6c08ee86"} Dec 11 10:12:44 crc kubenswrapper[4746]: I1211 10:12:44.112091 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-65c49f59b-9mqvh" event={"ID":"689e0dd9-7055-4ca2-81b3-c66d9850e166","Type":"ContainerStarted","Data":"cd0f5565248a6723b0324ab67ec9866ede82268889f06ccd04c4d777027bd8ad"} Dec 11 10:12:44 crc kubenswrapper[4746]: I1211 10:12:44.112945 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:12:44 crc kubenswrapper[4746]: I1211 10:12:44.115895 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7754896b7c-5hf99" event={"ID":"217bbeb1-db62-4c24-82de-be79c9bad92b","Type":"ContainerStarted","Data":"228d5f8d70e1c6ef50985d0c30baaf4e0709584799d424694704ffe3bec6fd7c"} Dec 11 10:12:44 crc kubenswrapper[4746]: I1211 10:12:44.115934 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cf78879c9-24z6n" Dec 11 10:12:44 crc kubenswrapper[4746]: I1211 10:12:44.191406 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-lznjj" podStartSLOduration=2.597961311 podStartE2EDuration="57.191386681s" podCreationTimestamp="2025-12-11 10:11:47 +0000 UTC" firstStartedPulling="2025-12-11 10:11:48.743782935 +0000 UTC m=+1081.603646238" lastFinishedPulling="2025-12-11 10:12:43.337208295 +0000 UTC m=+1136.197071608" observedRunningTime="2025-12-11 10:12:44.124577646 +0000 UTC m=+1136.984440979" watchObservedRunningTime="2025-12-11 10:12:44.191386681 +0000 UTC m=+1137.051249994" Dec 11 10:12:44 crc kubenswrapper[4746]: I1211 10:12:44.215716 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7754896b7c-5hf99" podStartSLOduration=14.215688903 podStartE2EDuration="14.215688903s" podCreationTimestamp="2025-12-11 10:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:12:44.163017209 +0000 UTC m=+1137.022880522" watchObservedRunningTime="2025-12-11 10:12:44.215688903 +0000 UTC m=+1137.075552216" Dec 11 10:12:44 crc kubenswrapper[4746]: I1211 10:12:44.222710 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-65c49f59b-9mqvh" podStartSLOduration=12.222691601 podStartE2EDuration="12.222691601s" podCreationTimestamp="2025-12-11 10:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:12:44.188509743 +0000 UTC m=+1137.048373056" watchObservedRunningTime="2025-12-11 10:12:44.222691601 +0000 UTC m=+1137.082554914" Dec 11 10:12:44 crc kubenswrapper[4746]: I1211 10:12:44.254549 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-24z6n"] Dec 11 
10:12:44 crc kubenswrapper[4746]: I1211 10:12:44.271672 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cf78879c9-24z6n"] Dec 11 10:12:45 crc kubenswrapper[4746]: I1211 10:12:45.129607 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4n4rh" event={"ID":"4080f404-aff6-42e2-856c-5b347b908963","Type":"ContainerStarted","Data":"989b35036cb1856e1131040cced5f7c6e9934632450908e0e37ed82485ea0cbf"} Dec 11 10:12:45 crc kubenswrapper[4746]: I1211 10:12:45.131875 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-65c49f59b-9mqvh" event={"ID":"689e0dd9-7055-4ca2-81b3-c66d9850e166","Type":"ContainerStarted","Data":"f83d5db685f0ca9b9150ab2e7bd0aea8c4bf743c47176951d94194d2d8838d0f"} Dec 11 10:12:45 crc kubenswrapper[4746]: I1211 10:12:45.132264 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:12:45 crc kubenswrapper[4746]: I1211 10:12:45.159635 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-4n4rh" podStartSLOduration=4.069235328 podStartE2EDuration="59.159613218s" podCreationTimestamp="2025-12-11 10:11:46 +0000 UTC" firstStartedPulling="2025-12-11 10:11:48.216901021 +0000 UTC m=+1081.076764334" lastFinishedPulling="2025-12-11 10:12:43.307278911 +0000 UTC m=+1136.167142224" observedRunningTime="2025-12-11 10:12:45.146570288 +0000 UTC m=+1138.006433601" watchObservedRunningTime="2025-12-11 10:12:45.159613218 +0000 UTC m=+1138.019476531" Dec 11 10:12:45 crc kubenswrapper[4746]: I1211 10:12:45.654950 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adc6dfbf-d8de-4c67-b56c-83127bd0dac5" path="/var/lib/kubelet/pods/adc6dfbf-d8de-4c67-b56c-83127bd0dac5/volumes" Dec 11 10:12:46 crc kubenswrapper[4746]: I1211 10:12:46.148670 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"53afe096-e47a-472d-a35a-a2da61b39aae","Type":"ContainerStarted","Data":"81eee28e1d469e3f5791642c67cea0198e21894b3d32e03dde81f523f9f19a69"} Dec 11 10:12:46 crc kubenswrapper[4746]: I1211 10:12:46.153616 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5","Type":"ContainerStarted","Data":"ef73ad5277a8ae25b601d0a226efffd25c2e8af63b5f422d620e4e0ab40faa96"} Dec 11 10:12:46 crc kubenswrapper[4746]: I1211 10:12:46.162313 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-77c4bd4944-jbg88" podUID="016a98db-33b2-4acb-a360-5e8a55aebd6c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 11 10:12:46 crc kubenswrapper[4746]: I1211 10:12:46.251869 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-b7f654f86-sh94c" podUID="b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.146:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.146:8443: connect: connection refused" Dec 11 10:12:47 crc kubenswrapper[4746]: I1211 10:12:47.513714 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-cf78879c9-24z6n" podUID="adc6dfbf-d8de-4c67-b56c-83127bd0dac5" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.143:5353: i/o timeout" Dec 11 10:12:48 crc kubenswrapper[4746]: I1211 10:12:48.183298 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53afe096-e47a-472d-a35a-a2da61b39aae","Type":"ContainerStarted","Data":"f741f1603479b8c7f5f2cd061ecfeffa1f6c727cd936902a115644c9f0250bf9"} Dec 11 10:12:48 crc kubenswrapper[4746]: I1211 10:12:48.188469 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5","Type":"ContainerStarted","Data":"34026be8ede3ae84458859791cf2749046221b44264c909559b8e4bde022a7dd"} Dec 11 10:12:48 crc kubenswrapper[4746]: I1211 10:12:48.235407 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=16.235373276 podStartE2EDuration="16.235373276s" podCreationTimestamp="2025-12-11 10:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:12:48.211358541 +0000 UTC m=+1141.071221854" watchObservedRunningTime="2025-12-11 10:12:48.235373276 +0000 UTC m=+1141.095236609" Dec 11 10:12:48 crc kubenswrapper[4746]: I1211 10:12:48.244995 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.244960684 podStartE2EDuration="7.244960684s" podCreationTimestamp="2025-12-11 10:12:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:12:48.235271733 +0000 UTC m=+1141.095135056" watchObservedRunningTime="2025-12-11 10:12:48.244960684 +0000 UTC m=+1141.104823987" Dec 11 10:12:51 crc kubenswrapper[4746]: I1211 10:12:51.419283 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 10:12:51 crc kubenswrapper[4746]: I1211 10:12:51.419933 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 10:12:51 crc kubenswrapper[4746]: I1211 10:12:51.462275 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 10:12:51 crc kubenswrapper[4746]: I1211 10:12:51.476853 4746 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 10:12:52 crc kubenswrapper[4746]: I1211 10:12:52.247193 4746 generic.go:334] "Generic (PLEG): container finished" podID="c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc" containerID="cd8fbfb6952d9f93570bbee8cf55a8f0b225b52bc83eb395b716d78cffae9157" exitCode=0 Dec 11 10:12:52 crc kubenswrapper[4746]: I1211 10:12:52.247287 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lznjj" event={"ID":"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc","Type":"ContainerDied","Data":"cd8fbfb6952d9f93570bbee8cf55a8f0b225b52bc83eb395b716d78cffae9157"} Dec 11 10:12:52 crc kubenswrapper[4746]: I1211 10:12:52.248299 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 10:12:52 crc kubenswrapper[4746]: I1211 10:12:52.248330 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 10:12:53 crc kubenswrapper[4746]: I1211 10:12:53.263963 4746 generic.go:334] "Generic (PLEG): container finished" podID="4080f404-aff6-42e2-856c-5b347b908963" containerID="989b35036cb1856e1131040cced5f7c6e9934632450908e0e37ed82485ea0cbf" exitCode=0 Dec 11 10:12:53 crc kubenswrapper[4746]: I1211 10:12:53.264057 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4n4rh" event={"ID":"4080f404-aff6-42e2-856c-5b347b908963","Type":"ContainerDied","Data":"989b35036cb1856e1131040cced5f7c6e9934632450908e0e37ed82485ea0cbf"} Dec 11 10:12:53 crc kubenswrapper[4746]: I1211 10:12:53.463301 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 10:12:53 crc kubenswrapper[4746]: I1211 10:12:53.463979 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 10:12:53 crc kubenswrapper[4746]: I1211 10:12:53.541861 4746 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 10:12:53 crc kubenswrapper[4746]: I1211 10:12:53.542435 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 10:12:54 crc kubenswrapper[4746]: I1211 10:12:54.297651 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 10:12:54 crc kubenswrapper[4746]: I1211 10:12:54.297715 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 10:12:54 crc kubenswrapper[4746]: I1211 10:12:54.398635 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 10:12:54 crc kubenswrapper[4746]: I1211 10:12:54.499934 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-lznjj" Dec 11 10:12:54 crc kubenswrapper[4746]: I1211 10:12:54.661059 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf4hg\" (UniqueName: \"kubernetes.io/projected/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-kube-api-access-wf4hg\") pod \"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc\" (UID: \"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc\") " Dec 11 10:12:54 crc kubenswrapper[4746]: I1211 10:12:54.661156 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-db-sync-config-data\") pod \"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc\" (UID: \"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc\") " Dec 11 10:12:54 crc kubenswrapper[4746]: I1211 10:12:54.661401 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-combined-ca-bundle\") pod \"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc\" (UID: \"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc\") " Dec 11 10:12:54 crc kubenswrapper[4746]: I1211 10:12:54.671928 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-kube-api-access-wf4hg" (OuterVolumeSpecName: "kube-api-access-wf4hg") pod "c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc" (UID: "c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc"). InnerVolumeSpecName "kube-api-access-wf4hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:12:54 crc kubenswrapper[4746]: I1211 10:12:54.695278 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc" (UID: "c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:54 crc kubenswrapper[4746]: I1211 10:12:54.703241 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc" (UID: "c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:54 crc kubenswrapper[4746]: I1211 10:12:54.764210 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:54 crc kubenswrapper[4746]: I1211 10:12:54.764245 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf4hg\" (UniqueName: \"kubernetes.io/projected/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-kube-api-access-wf4hg\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:54 crc kubenswrapper[4746]: I1211 10:12:54.764255 4746 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:55 crc kubenswrapper[4746]: I1211 10:12:55.306695 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-lznjj" Dec 11 10:12:55 crc kubenswrapper[4746]: I1211 10:12:55.309360 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-lznjj" event={"ID":"c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc","Type":"ContainerDied","Data":"3ece94411bd8eb0d81dc00e67f76a045ee02399feb076e2dd43be5af7fd08a57"} Dec 11 10:12:55 crc kubenswrapper[4746]: I1211 10:12:55.309518 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ece94411bd8eb0d81dc00e67f76a045ee02399feb076e2dd43be5af7fd08a57" Dec 11 10:12:55 crc kubenswrapper[4746]: I1211 10:12:55.629675 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 10:12:55 crc kubenswrapper[4746]: I1211 10:12:55.950168 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-55c6884c59-pb7cb"] Dec 11 10:12:55 crc kubenswrapper[4746]: E1211 10:12:55.951092 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc" containerName="barbican-db-sync" Dec 11 10:12:55 crc kubenswrapper[4746]: I1211 10:12:55.951109 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc" containerName="barbican-db-sync" Dec 11 10:12:55 crc kubenswrapper[4746]: E1211 10:12:55.951130 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc6dfbf-d8de-4c67-b56c-83127bd0dac5" containerName="init" Dec 11 10:12:55 crc kubenswrapper[4746]: I1211 10:12:55.951136 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="adc6dfbf-d8de-4c67-b56c-83127bd0dac5" containerName="init" Dec 11 10:12:55 crc kubenswrapper[4746]: E1211 10:12:55.951163 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adc6dfbf-d8de-4c67-b56c-83127bd0dac5" containerName="dnsmasq-dns" Dec 11 10:12:55 crc kubenswrapper[4746]: I1211 10:12:55.951170 4746 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="adc6dfbf-d8de-4c67-b56c-83127bd0dac5" containerName="dnsmasq-dns" Dec 11 10:12:55 crc kubenswrapper[4746]: I1211 10:12:55.951419 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc" containerName="barbican-db-sync" Dec 11 10:12:55 crc kubenswrapper[4746]: I1211 10:12:55.951445 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="adc6dfbf-d8de-4c67-b56c-83127bd0dac5" containerName="dnsmasq-dns" Dec 11 10:12:55 crc kubenswrapper[4746]: I1211 10:12:55.952635 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:55 crc kubenswrapper[4746]: I1211 10:12:55.959468 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:12:55 crc kubenswrapper[4746]: I1211 10:12:55.963797 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 11 10:12:55 crc kubenswrapper[4746]: I1211 10:12:55.977809 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-js5ns" Dec 11 10:12:55 crc kubenswrapper[4746]: I1211 10:12:55.977962 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 11 10:12:55 crc kubenswrapper[4746]: I1211 10:12:55.979541 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55c6884c59-pb7cb"] Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.005703 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f2e937-e45d-48e4-be34-f013cb61dc7e-logs\") pod \"barbican-worker-55c6884c59-pb7cb\" (UID: \"17f2e937-e45d-48e4-be34-f013cb61dc7e\") " pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 
10:12:56.005805 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f2e937-e45d-48e4-be34-f013cb61dc7e-config-data-custom\") pod \"barbican-worker-55c6884c59-pb7cb\" (UID: \"17f2e937-e45d-48e4-be34-f013cb61dc7e\") " pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.005832 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f2e937-e45d-48e4-be34-f013cb61dc7e-combined-ca-bundle\") pod \"barbican-worker-55c6884c59-pb7cb\" (UID: \"17f2e937-e45d-48e4-be34-f013cb61dc7e\") " pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.005880 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghc7w\" (UniqueName: \"kubernetes.io/projected/17f2e937-e45d-48e4-be34-f013cb61dc7e-kube-api-access-ghc7w\") pod \"barbican-worker-55c6884c59-pb7cb\" (UID: \"17f2e937-e45d-48e4-be34-f013cb61dc7e\") " pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.005916 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f2e937-e45d-48e4-be34-f013cb61dc7e-config-data\") pod \"barbican-worker-55c6884c59-pb7cb\" (UID: \"17f2e937-e45d-48e4-be34-f013cb61dc7e\") " pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.012943 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5pgdn"] Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.015163 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.034088 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5d4579cd86-47qwg"] Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.037188 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.051359 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.100484 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5pgdn"] Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.109172 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f2e937-e45d-48e4-be34-f013cb61dc7e-config-data\") pod \"barbican-worker-55c6884c59-pb7cb\" (UID: \"17f2e937-e45d-48e4-be34-f013cb61dc7e\") " pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.109220 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.109258 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " 
pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.109285 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6346d2a5-4279-407e-981e-423993612a5c-logs\") pod \"barbican-keystone-listener-5d4579cd86-47qwg\" (UID: \"6346d2a5-4279-407e-981e-423993612a5c\") " pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.109318 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6346d2a5-4279-407e-981e-423993612a5c-config-data-custom\") pod \"barbican-keystone-listener-5d4579cd86-47qwg\" (UID: \"6346d2a5-4279-407e-981e-423993612a5c\") " pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.109345 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.109363 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6346d2a5-4279-407e-981e-423993612a5c-combined-ca-bundle\") pod \"barbican-keystone-listener-5d4579cd86-47qwg\" (UID: \"6346d2a5-4279-407e-981e-423993612a5c\") " pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.109397 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f2e937-e45d-48e4-be34-f013cb61dc7e-logs\") pod 
\"barbican-worker-55c6884c59-pb7cb\" (UID: \"17f2e937-e45d-48e4-be34-f013cb61dc7e\") " pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.109418 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.109474 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6346d2a5-4279-407e-981e-423993612a5c-config-data\") pod \"barbican-keystone-listener-5d4579cd86-47qwg\" (UID: \"6346d2a5-4279-407e-981e-423993612a5c\") " pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.109493 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f2e937-e45d-48e4-be34-f013cb61dc7e-combined-ca-bundle\") pod \"barbican-worker-55c6884c59-pb7cb\" (UID: \"17f2e937-e45d-48e4-be34-f013cb61dc7e\") " pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.109509 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f2e937-e45d-48e4-be34-f013cb61dc7e-config-data-custom\") pod \"barbican-worker-55c6884c59-pb7cb\" (UID: \"17f2e937-e45d-48e4-be34-f013cb61dc7e\") " pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.109533 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-config\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.109553 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnfkf\" (UniqueName: \"kubernetes.io/projected/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-kube-api-access-cnfkf\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.109589 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2prv4\" (UniqueName: \"kubernetes.io/projected/6346d2a5-4279-407e-981e-423993612a5c-kube-api-access-2prv4\") pod \"barbican-keystone-listener-5d4579cd86-47qwg\" (UID: \"6346d2a5-4279-407e-981e-423993612a5c\") " pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.109620 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghc7w\" (UniqueName: \"kubernetes.io/projected/17f2e937-e45d-48e4-be34-f013cb61dc7e-kube-api-access-ghc7w\") pod \"barbican-worker-55c6884c59-pb7cb\" (UID: \"17f2e937-e45d-48e4-be34-f013cb61dc7e\") " pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.112743 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/17f2e937-e45d-48e4-be34-f013cb61dc7e-logs\") pod \"barbican-worker-55c6884c59-pb7cb\" (UID: \"17f2e937-e45d-48e4-be34-f013cb61dc7e\") " pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.119236 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/17f2e937-e45d-48e4-be34-f013cb61dc7e-config-data-custom\") pod \"barbican-worker-55c6884c59-pb7cb\" (UID: \"17f2e937-e45d-48e4-be34-f013cb61dc7e\") " pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.127046 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/17f2e937-e45d-48e4-be34-f013cb61dc7e-config-data\") pod \"barbican-worker-55c6884c59-pb7cb\" (UID: \"17f2e937-e45d-48e4-be34-f013cb61dc7e\") " pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.139566 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17f2e937-e45d-48e4-be34-f013cb61dc7e-combined-ca-bundle\") pod \"barbican-worker-55c6884c59-pb7cb\" (UID: \"17f2e937-e45d-48e4-be34-f013cb61dc7e\") " pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.142323 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d4579cd86-47qwg"] Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.154757 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghc7w\" (UniqueName: \"kubernetes.io/projected/17f2e937-e45d-48e4-be34-f013cb61dc7e-kube-api-access-ghc7w\") pod \"barbican-worker-55c6884c59-pb7cb\" (UID: \"17f2e937-e45d-48e4-be34-f013cb61dc7e\") " pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.212733 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " 
pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.213114 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6346d2a5-4279-407e-981e-423993612a5c-logs\") pod \"barbican-keystone-listener-5d4579cd86-47qwg\" (UID: \"6346d2a5-4279-407e-981e-423993612a5c\") " pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.213151 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6346d2a5-4279-407e-981e-423993612a5c-config-data-custom\") pod \"barbican-keystone-listener-5d4579cd86-47qwg\" (UID: \"6346d2a5-4279-407e-981e-423993612a5c\") " pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.213190 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.213213 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6346d2a5-4279-407e-981e-423993612a5c-combined-ca-bundle\") pod \"barbican-keystone-listener-5d4579cd86-47qwg\" (UID: \"6346d2a5-4279-407e-981e-423993612a5c\") " pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.213258 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: 
\"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.213316 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6346d2a5-4279-407e-981e-423993612a5c-config-data\") pod \"barbican-keystone-listener-5d4579cd86-47qwg\" (UID: \"6346d2a5-4279-407e-981e-423993612a5c\") " pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.213346 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-config\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.213374 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnfkf\" (UniqueName: \"kubernetes.io/projected/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-kube-api-access-cnfkf\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.213422 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2prv4\" (UniqueName: \"kubernetes.io/projected/6346d2a5-4279-407e-981e-423993612a5c-kube-api-access-2prv4\") pod \"barbican-keystone-listener-5d4579cd86-47qwg\" (UID: \"6346d2a5-4279-407e-981e-423993612a5c\") " pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.213466 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-dns-swift-storage-0\") pod 
\"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.217066 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.218304 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.226313 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6346d2a5-4279-407e-981e-423993612a5c-logs\") pod \"barbican-keystone-listener-5d4579cd86-47qwg\" (UID: \"6346d2a5-4279-407e-981e-423993612a5c\") " pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.227181 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.228075 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-config\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " 
pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.230088 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.256776 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2prv4\" (UniqueName: \"kubernetes.io/projected/6346d2a5-4279-407e-981e-423993612a5c-kube-api-access-2prv4\") pod \"barbican-keystone-listener-5d4579cd86-47qwg\" (UID: \"6346d2a5-4279-407e-981e-423993612a5c\") " pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.266851 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnfkf\" (UniqueName: \"kubernetes.io/projected/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-kube-api-access-cnfkf\") pod \"dnsmasq-dns-848cf88cfc-5pgdn\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.282116 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6346d2a5-4279-407e-981e-423993612a5c-combined-ca-bundle\") pod \"barbican-keystone-listener-5d4579cd86-47qwg\" (UID: \"6346d2a5-4279-407e-981e-423993612a5c\") " pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.282710 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6346d2a5-4279-407e-981e-423993612a5c-config-data\") pod \"barbican-keystone-listener-5d4579cd86-47qwg\" (UID: 
\"6346d2a5-4279-407e-981e-423993612a5c\") " pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.292589 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6346d2a5-4279-407e-981e-423993612a5c-config-data-custom\") pod \"barbican-keystone-listener-5d4579cd86-47qwg\" (UID: \"6346d2a5-4279-407e-981e-423993612a5c\") " pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.321027 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-55c6884c59-pb7cb" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.360462 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.363514 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7b5f77766d-fffkr"] Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.382253 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.387532 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.388318 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.390424 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b5f77766d-fffkr"] Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.424620 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-config-data\") pod \"barbican-api-7b5f77766d-fffkr\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.424689 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fkts\" (UniqueName: \"kubernetes.io/projected/360eee01-6461-4d34-b011-888f1f1026ac-kube-api-access-9fkts\") pod \"barbican-api-7b5f77766d-fffkr\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.424752 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-config-data-custom\") pod \"barbican-api-7b5f77766d-fffkr\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.424778 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-combined-ca-bundle\") pod \"barbican-api-7b5f77766d-fffkr\" (UID: 
\"360eee01-6461-4d34-b011-888f1f1026ac\") " pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.424795 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/360eee01-6461-4d34-b011-888f1f1026ac-logs\") pod \"barbican-api-7b5f77766d-fffkr\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.482773 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.525749 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpxqw\" (UniqueName: \"kubernetes.io/projected/4080f404-aff6-42e2-856c-5b347b908963-kube-api-access-fpxqw\") pod \"4080f404-aff6-42e2-856c-5b347b908963\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.525833 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-db-sync-config-data\") pod \"4080f404-aff6-42e2-856c-5b347b908963\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.525979 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-combined-ca-bundle\") pod \"4080f404-aff6-42e2-856c-5b347b908963\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.526032 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-config-data\") 
pod \"4080f404-aff6-42e2-856c-5b347b908963\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.526092 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4080f404-aff6-42e2-856c-5b347b908963-etc-machine-id\") pod \"4080f404-aff6-42e2-856c-5b347b908963\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.526205 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-scripts\") pod \"4080f404-aff6-42e2-856c-5b347b908963\" (UID: \"4080f404-aff6-42e2-856c-5b347b908963\") " Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.526509 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-config-data\") pod \"barbican-api-7b5f77766d-fffkr\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.526544 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fkts\" (UniqueName: \"kubernetes.io/projected/360eee01-6461-4d34-b011-888f1f1026ac-kube-api-access-9fkts\") pod \"barbican-api-7b5f77766d-fffkr\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.526590 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-config-data-custom\") pod \"barbican-api-7b5f77766d-fffkr\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:56 crc 
kubenswrapper[4746]: I1211 10:12:56.526609 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/360eee01-6461-4d34-b011-888f1f1026ac-logs\") pod \"barbican-api-7b5f77766d-fffkr\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.526627 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-combined-ca-bundle\") pod \"barbican-api-7b5f77766d-fffkr\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.527174 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4080f404-aff6-42e2-856c-5b347b908963-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "4080f404-aff6-42e2-856c-5b347b908963" (UID: "4080f404-aff6-42e2-856c-5b347b908963"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.533104 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/360eee01-6461-4d34-b011-888f1f1026ac-logs\") pod \"barbican-api-7b5f77766d-fffkr\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.535668 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-scripts" (OuterVolumeSpecName: "scripts") pod "4080f404-aff6-42e2-856c-5b347b908963" (UID: "4080f404-aff6-42e2-856c-5b347b908963"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.541626 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-config-data-custom\") pod \"barbican-api-7b5f77766d-fffkr\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.544844 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-combined-ca-bundle\") pod \"barbican-api-7b5f77766d-fffkr\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.559273 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4080f404-aff6-42e2-856c-5b347b908963" (UID: "4080f404-aff6-42e2-856c-5b347b908963"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.559307 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4080f404-aff6-42e2-856c-5b347b908963-kube-api-access-fpxqw" (OuterVolumeSpecName: "kube-api-access-fpxqw") pod "4080f404-aff6-42e2-856c-5b347b908963" (UID: "4080f404-aff6-42e2-856c-5b347b908963"). InnerVolumeSpecName "kube-api-access-fpxqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.563162 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-config-data\") pod \"barbican-api-7b5f77766d-fffkr\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.583179 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fkts\" (UniqueName: \"kubernetes.io/projected/360eee01-6461-4d34-b011-888f1f1026ac-kube-api-access-9fkts\") pod \"barbican-api-7b5f77766d-fffkr\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.605269 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4080f404-aff6-42e2-856c-5b347b908963" (UID: "4080f404-aff6-42e2-856c-5b347b908963"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.628785 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.628860 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpxqw\" (UniqueName: \"kubernetes.io/projected/4080f404-aff6-42e2-856c-5b347b908963-kube-api-access-fpxqw\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.628872 4746 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.628882 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.628892 4746 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/4080f404-aff6-42e2-856c-5b347b908963-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.636550 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-config-data" (OuterVolumeSpecName: "config-data") pod "4080f404-aff6-42e2-856c-5b347b908963" (UID: "4080f404-aff6-42e2-856c-5b347b908963"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.730740 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4080f404-aff6-42e2-856c-5b347b908963-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:12:56 crc kubenswrapper[4746]: I1211 10:12:56.734760 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:12:57 crc kubenswrapper[4746]: I1211 10:12:57.354000 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-4n4rh" event={"ID":"4080f404-aff6-42e2-856c-5b347b908963","Type":"ContainerDied","Data":"e4e61b2f0935ac932f43b2f65b93da4b6bd9d875cc540f5662f2032f25d21f8f"} Dec 11 10:12:57 crc kubenswrapper[4746]: I1211 10:12:57.354407 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4e61b2f0935ac932f43b2f65b93da4b6bd9d875cc540f5662f2032f25d21f8f" Dec 11 10:12:57 crc kubenswrapper[4746]: I1211 10:12:57.354179 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-4n4rh" Dec 11 10:12:57 crc kubenswrapper[4746]: I1211 10:12:57.812016 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 10:12:57 crc kubenswrapper[4746]: I1211 10:12:57.812582 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:12:57 crc kubenswrapper[4746]: I1211 10:12:57.898898 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 10:12:57 crc kubenswrapper[4746]: I1211 10:12:57.957260 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 10:12:57 crc kubenswrapper[4746]: E1211 10:12:57.958309 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4080f404-aff6-42e2-856c-5b347b908963" containerName="cinder-db-sync" Dec 11 10:12:57 crc kubenswrapper[4746]: I1211 10:12:57.958426 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4080f404-aff6-42e2-856c-5b347b908963" containerName="cinder-db-sync" Dec 11 10:12:57 crc kubenswrapper[4746]: I1211 10:12:57.958753 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4080f404-aff6-42e2-856c-5b347b908963" containerName="cinder-db-sync" Dec 11 10:12:57 crc kubenswrapper[4746]: I1211 10:12:57.960383 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 10:12:57 crc kubenswrapper[4746]: I1211 10:12:57.963640 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-9lnrp" Dec 11 10:12:57 crc kubenswrapper[4746]: I1211 10:12:57.963916 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 11 10:12:57 crc kubenswrapper[4746]: I1211 10:12:57.964451 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 11 10:12:57 crc kubenswrapper[4746]: I1211 10:12:57.964599 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.003834 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.048204 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5pgdn"] Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.061684 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvc8x\" (UniqueName: \"kubernetes.io/projected/2a804903-a2db-4a38-88e6-53e34f30b44c-kube-api-access-bvc8x\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.061740 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-config-data\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.061772 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a804903-a2db-4a38-88e6-53e34f30b44c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.061805 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-scripts\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.061826 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.061867 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.062014 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pxczw"] Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.075540 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.101863 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pxczw"] Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.164642 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvc8x\" (UniqueName: \"kubernetes.io/projected/2a804903-a2db-4a38-88e6-53e34f30b44c-kube-api-access-bvc8x\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.164696 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-config-data\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.164722 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a804903-a2db-4a38-88e6-53e34f30b44c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.164774 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-scripts\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.164801 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: 
\"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.164835 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.166651 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a804903-a2db-4a38-88e6-53e34f30b44c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.180648 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.182324 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.185233 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.187489 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.187708 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.187996 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-scripts\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.192617 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.196152 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-config-data\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.212917 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvc8x\" (UniqueName: 
\"kubernetes.io/projected/2a804903-a2db-4a38-88e6-53e34f30b44c-kube-api-access-bvc8x\") pod \"cinder-scheduler-0\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.270223 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-scripts\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.270290 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdf5l\" (UniqueName: \"kubernetes.io/projected/ec9180c8-a1f4-49c8-b699-e8be6081edb0-kube-api-access-mdf5l\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.270328 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-dns-svc\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.270354 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.270412 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-config-data\") pod 
\"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.270434 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-config\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.270468 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec9180c8-a1f4-49c8-b699-e8be6081edb0-logs\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.270485 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.270508 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcgth\" (UniqueName: \"kubernetes.io/projected/f1d15816-c937-40de-971e-f13588eed4bc-kube-api-access-wcgth\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.270544 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec9180c8-a1f4-49c8-b699-e8be6081edb0-etc-machine-id\") pod \"cinder-api-0\" (UID: 
\"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.270568 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-config-data-custom\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.270594 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.270648 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.301090 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.374195 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec9180c8-a1f4-49c8-b699-e8be6081edb0-logs\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.374247 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.374279 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcgth\" (UniqueName: \"kubernetes.io/projected/f1d15816-c937-40de-971e-f13588eed4bc-kube-api-access-wcgth\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.374312 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec9180c8-a1f4-49c8-b699-e8be6081edb0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.374335 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-config-data-custom\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.374364 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.374429 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.374491 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-scripts\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.374517 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdf5l\" (UniqueName: \"kubernetes.io/projected/ec9180c8-a1f4-49c8-b699-e8be6081edb0-kube-api-access-mdf5l\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.374542 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-dns-svc\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.374571 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.374647 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-config-data\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.374679 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-config\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.374859 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec9180c8-a1f4-49c8-b699-e8be6081edb0-logs\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.375100 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec9180c8-a1f4-49c8-b699-e8be6081edb0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.376970 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc 
kubenswrapper[4746]: I1211 10:12:58.376958 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.377142 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.380191 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-config\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.383463 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-config-data-custom\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.384705 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-scripts\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.384898 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-dns-svc\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.401559 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.404002 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-config-data\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.411954 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcgth\" (UniqueName: \"kubernetes.io/projected/f1d15816-c937-40de-971e-f13588eed4bc-kube-api-access-wcgth\") pod \"dnsmasq-dns-6578955fd5-pxczw\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.419123 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdf5l\" (UniqueName: \"kubernetes.io/projected/ec9180c8-a1f4-49c8-b699-e8be6081edb0-kube-api-access-mdf5l\") pod \"cinder-api-0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " pod="openstack/cinder-api-0" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.427282 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:12:58 crc kubenswrapper[4746]: I1211 10:12:58.611652 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 10:12:59 crc kubenswrapper[4746]: I1211 10:12:59.709850 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5d4579cd86-47qwg"] Dec 11 10:12:59 crc kubenswrapper[4746]: E1211 10:12:59.766707 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" Dec 11 10:12:59 crc kubenswrapper[4746]: I1211 10:12:59.839447 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:12:59 crc kubenswrapper[4746]: I1211 10:12:59.875244 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.040653 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7b5f77766d-fffkr"] Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.070678 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-55c6884c59-pb7cb"] Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.126611 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.149859 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.374144 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pxczw"] Dec 11 10:13:00 crc kubenswrapper[4746]: W1211 10:13:00.379555 4746 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1d15816_c937_40de_971e_f13588eed4bc.slice/crio-fb8c673ab98b92e65842fc70bcade4cdc11f821d82fb603198378199c1ac28fc WatchSource:0}: Error finding container fb8c673ab98b92e65842fc70bcade4cdc11f821d82fb603198378199c1ac28fc: Status 404 returned error can't find the container with id fb8c673ab98b92e65842fc70bcade4cdc11f821d82fb603198378199c1ac28fc Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.397396 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5pgdn"] Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.491903 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a804903-a2db-4a38-88e6-53e34f30b44c","Type":"ContainerStarted","Data":"0a4d1186c285a5aa7c0e55f0e823facc387e6f87159402db462457448a46c436"} Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.504428 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ec9180c8-a1f4-49c8-b699-e8be6081edb0","Type":"ContainerStarted","Data":"415d76d5dced14ae4d70f69c9548105cb29437c32674d92d9355e1dd3361da60"} Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.509503 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5f77766d-fffkr" event={"ID":"360eee01-6461-4d34-b011-888f1f1026ac","Type":"ContainerStarted","Data":"de7b10a14d924b704eb49f917cef53f0dc1145afbff7a63b61744691cecc0bc1"} Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.514720 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-pxczw" event={"ID":"f1d15816-c937-40de-971e-f13588eed4bc","Type":"ContainerStarted","Data":"fb8c673ab98b92e65842fc70bcade4cdc11f821d82fb603198378199c1ac28fc"} Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.517562 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" 
event={"ID":"6346d2a5-4279-407e-981e-423993612a5c","Type":"ContainerStarted","Data":"f742e720ff57e31314db1b7840ab645ed307157c02dde5282ce6aa02bb7842ea"} Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.525557 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79","Type":"ContainerStarted","Data":"aae63e1ffbe9b38b987faa84638c081c5d87fb1e3a8c31ffa4cf5f398789b77b"} Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.525871 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" containerName="ceilometer-notification-agent" containerID="cri-o://be130564269f4726b1cd5b96890cbde2d701ef99a4ac5a303a09a6fd10f2bd53" gracePeriod=30 Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.526815 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.526841 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" containerName="proxy-httpd" containerID="cri-o://aae63e1ffbe9b38b987faa84638c081c5d87fb1e3a8c31ffa4cf5f398789b77b" gracePeriod=30 Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.526994 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" containerName="sg-core" containerID="cri-o://19f9c318257ca4d1a480e995a0556dc6a7d53cb02db05c2ecc3badde42bd3dd5" gracePeriod=30 Dec 11 10:13:00 crc kubenswrapper[4746]: I1211 10:13:00.546586 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55c6884c59-pb7cb" event={"ID":"17f2e937-e45d-48e4-be34-f013cb61dc7e","Type":"ContainerStarted","Data":"5d4eaed733b7b3a40602c1a1d534cb60d3118ebc54ede3c6233ba1fe76b4a4c1"} Dec 11 10:13:01 crc 
kubenswrapper[4746]: I1211 10:13:01.018892 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.249471 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7754896b7c-5hf99" Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.352695 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-548544f678-9jrhc"] Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.353077 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-548544f678-9jrhc" podUID="25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101" containerName="neutron-api" containerID="cri-o://eb638379b914e22cfce30de40cb2cf4ebe7022713c56d8faffa7195121e1bd27" gracePeriod=30 Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.353703 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-548544f678-9jrhc" podUID="25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101" containerName="neutron-httpd" containerID="cri-o://34f9998be0753a53f6efd0fae94810c513ddf757f4f1c98a62b83175766c9f25" gracePeriod=30 Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.703479 4746 generic.go:334] "Generic (PLEG): container finished" podID="f1d15816-c937-40de-971e-f13588eed4bc" containerID="8bc647c94a8355c0650d9b227730a450b75a93f80f67d8401dacc24cd469c2f9" exitCode=0 Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.717703 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7b5f77766d-fffkr" podStartSLOduration=5.717670041 podStartE2EDuration="5.717670041s" podCreationTimestamp="2025-12-11 10:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:01.686881694 +0000 UTC m=+1154.546745007" watchObservedRunningTime="2025-12-11 10:13:01.717670041 +0000 UTC 
m=+1154.577533344" Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.719958 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5f77766d-fffkr" event={"ID":"360eee01-6461-4d34-b011-888f1f1026ac","Type":"ContainerStarted","Data":"ce0a3a416f1d2ac59317e18b6d063b00f2d2adc7c6f1428893f28e77dac86eed"} Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.720006 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.720018 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5f77766d-fffkr" event={"ID":"360eee01-6461-4d34-b011-888f1f1026ac","Type":"ContainerStarted","Data":"b346cef92731e27d2fdf01205d4e416356b77666acb8fae6f8950a703ae5649a"} Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.720079 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-pxczw" event={"ID":"f1d15816-c937-40de-971e-f13588eed4bc","Type":"ContainerDied","Data":"8bc647c94a8355c0650d9b227730a450b75a93f80f67d8401dacc24cd469c2f9"} Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.720095 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.774576 4746 generic.go:334] "Generic (PLEG): container finished" podID="ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0" containerID="247ee579dc8e4a8f61d6a565bd7e132aef093ae53b0d2b9d829bbfab3b320f6f" exitCode=0 Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.774689 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" event={"ID":"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0","Type":"ContainerDied","Data":"247ee579dc8e4a8f61d6a565bd7e132aef093ae53b0d2b9d829bbfab3b320f6f"} Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.774722 4746 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" event={"ID":"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0","Type":"ContainerStarted","Data":"8465d51f9aec57772fd7d0c1fc0887ffe01598f52dfd7ef44938dd28dc1691f2"} Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.846550 4746 generic.go:334] "Generic (PLEG): container finished" podID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" containerID="aae63e1ffbe9b38b987faa84638c081c5d87fb1e3a8c31ffa4cf5f398789b77b" exitCode=0 Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.846598 4746 generic.go:334] "Generic (PLEG): container finished" podID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" containerID="19f9c318257ca4d1a480e995a0556dc6a7d53cb02db05c2ecc3badde42bd3dd5" exitCode=2 Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.846669 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79","Type":"ContainerDied","Data":"aae63e1ffbe9b38b987faa84638c081c5d87fb1e3a8c31ffa4cf5f398789b77b"} Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.846708 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79","Type":"ContainerDied","Data":"19f9c318257ca4d1a480e995a0556dc6a7d53cb02db05c2ecc3badde42bd3dd5"} Dec 11 10:13:01 crc kubenswrapper[4746]: I1211 10:13:01.897169 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ec9180c8-a1f4-49c8-b699-e8be6081edb0","Type":"ContainerStarted","Data":"657c81e092410dc723272734c2690973035af2198bfaa777cb1ec07ae10b0b95"} Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.084377 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-bb679f888-dj844" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.579370 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.645381 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnfkf\" (UniqueName: \"kubernetes.io/projected/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-kube-api-access-cnfkf\") pod \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.645477 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-ovsdbserver-nb\") pod \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.645513 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-ovsdbserver-sb\") pod \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.645637 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-dns-svc\") pod \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.645754 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-dns-swift-storage-0\") pod \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.645996 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-config\") pod \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\" (UID: \"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0\") " Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.655452 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-kube-api-access-cnfkf" (OuterVolumeSpecName: "kube-api-access-cnfkf") pod "ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0" (UID: "ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0"). InnerVolumeSpecName "kube-api-access-cnfkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.692245 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-config" (OuterVolumeSpecName: "config") pod "ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0" (UID: "ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.695516 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0" (UID: "ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.703240 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0" (UID: "ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.704894 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0" (UID: "ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.723138 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0" (UID: "ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.727776 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-bb679f888-dj844" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.750095 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnfkf\" (UniqueName: \"kubernetes.io/projected/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-kube-api-access-cnfkf\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.750130 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.750139 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 
10:13:02.750149 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.750158 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.750167 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.960723 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-pxczw" event={"ID":"f1d15816-c937-40de-971e-f13588eed4bc","Type":"ContainerStarted","Data":"ab48afa002ac25205c0f2d14bec592360ae2ee2dba1c7424e504da0fb30e96fc"} Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.960815 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.968610 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" event={"ID":"ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0","Type":"ContainerDied","Data":"8465d51f9aec57772fd7d0c1fc0887ffe01598f52dfd7ef44938dd28dc1691f2"} Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.968682 4746 scope.go:117] "RemoveContainer" containerID="247ee579dc8e4a8f61d6a565bd7e132aef093ae53b0d2b9d829bbfab3b320f6f" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.968813 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-5pgdn" Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.977688 4746 generic.go:334] "Generic (PLEG): container finished" podID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" containerID="be130564269f4726b1cd5b96890cbde2d701ef99a4ac5a303a09a6fd10f2bd53" exitCode=0 Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.977765 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79","Type":"ContainerDied","Data":"be130564269f4726b1cd5b96890cbde2d701ef99a4ac5a303a09a6fd10f2bd53"} Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.981209 4746 generic.go:334] "Generic (PLEG): container finished" podID="25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101" containerID="34f9998be0753a53f6efd0fae94810c513ddf757f4f1c98a62b83175766c9f25" exitCode=0 Dec 11 10:13:02 crc kubenswrapper[4746]: I1211 10:13:02.982246 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-548544f678-9jrhc" event={"ID":"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101","Type":"ContainerDied","Data":"34f9998be0753a53f6efd0fae94810c513ddf757f4f1c98a62b83175766c9f25"} Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:02.999219 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-pxczw" podStartSLOduration=5.999193732 podStartE2EDuration="5.999193732s" podCreationTimestamp="2025-12-11 10:12:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:02.986621755 +0000 UTC m=+1155.846485088" watchObservedRunningTime="2025-12-11 10:13:02.999193732 +0000 UTC m=+1155.859057045" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.200217 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-d68478596-8jx82"] Dec 11 10:13:03 crc kubenswrapper[4746]: E1211 10:13:03.203540 4746 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0" containerName="init" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.203633 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0" containerName="init" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.204191 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0" containerName="init" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.216016 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5pgdn"] Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.216534 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.218778 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.219070 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.234209 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-5pgdn"] Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.260115 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d68478596-8jx82"] Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.355350 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80265cad-1b6f-4dfc-aee2-04a1da6152fc-logs\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.355475 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80265cad-1b6f-4dfc-aee2-04a1da6152fc-public-tls-certs\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.355535 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80265cad-1b6f-4dfc-aee2-04a1da6152fc-internal-tls-certs\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.355598 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80265cad-1b6f-4dfc-aee2-04a1da6152fc-combined-ca-bundle\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.355638 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8wjt\" (UniqueName: \"kubernetes.io/projected/80265cad-1b6f-4dfc-aee2-04a1da6152fc-kube-api-access-g8wjt\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.355695 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80265cad-1b6f-4dfc-aee2-04a1da6152fc-config-data\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc 
kubenswrapper[4746]: I1211 10:13:03.355749 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80265cad-1b6f-4dfc-aee2-04a1da6152fc-config-data-custom\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.458612 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80265cad-1b6f-4dfc-aee2-04a1da6152fc-logs\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.458732 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80265cad-1b6f-4dfc-aee2-04a1da6152fc-public-tls-certs\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.458767 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80265cad-1b6f-4dfc-aee2-04a1da6152fc-internal-tls-certs\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.458815 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80265cad-1b6f-4dfc-aee2-04a1da6152fc-combined-ca-bundle\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 
10:13:03.458845 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8wjt\" (UniqueName: \"kubernetes.io/projected/80265cad-1b6f-4dfc-aee2-04a1da6152fc-kube-api-access-g8wjt\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.458886 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80265cad-1b6f-4dfc-aee2-04a1da6152fc-config-data\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.458930 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80265cad-1b6f-4dfc-aee2-04a1da6152fc-config-data-custom\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.465533 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80265cad-1b6f-4dfc-aee2-04a1da6152fc-logs\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.472077 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/80265cad-1b6f-4dfc-aee2-04a1da6152fc-public-tls-certs\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.474560 4746 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/80265cad-1b6f-4dfc-aee2-04a1da6152fc-internal-tls-certs\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.475211 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/80265cad-1b6f-4dfc-aee2-04a1da6152fc-config-data\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.475772 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80265cad-1b6f-4dfc-aee2-04a1da6152fc-combined-ca-bundle\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.485742 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8wjt\" (UniqueName: \"kubernetes.io/projected/80265cad-1b6f-4dfc-aee2-04a1da6152fc-kube-api-access-g8wjt\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.490592 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/80265cad-1b6f-4dfc-aee2-04a1da6152fc-config-data-custom\") pod \"barbican-api-d68478596-8jx82\" (UID: \"80265cad-1b6f-4dfc-aee2-04a1da6152fc\") " pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.599120 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.649532 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0" path="/var/lib/kubelet/pods/ac976b6a-fcd4-4e6c-8d78-c10daadcc3e0/volumes" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.714894 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-b7f654f86-sh94c" Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.828722 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77c4bd4944-jbg88"] Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.829127 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-77c4bd4944-jbg88" podUID="016a98db-33b2-4acb-a360-5e8a55aebd6c" containerName="horizon-log" containerID="cri-o://98ac9bd0e996c24ef47fc1358f225bf0638d3c2eb7ec7974d3c9abeb6a0901fa" gracePeriod=30 Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.829967 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-77c4bd4944-jbg88" podUID="016a98db-33b2-4acb-a360-5e8a55aebd6c" containerName="horizon" containerID="cri-o://453c2a84853f09eb6406ce82bcc4f8dd06a8ac4f033e57a7416674cbed96eb48" gracePeriod=30 Dec 11 10:13:03 crc kubenswrapper[4746]: I1211 10:13:03.846691 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-77c4bd4944-jbg88" podUID="016a98db-33b2-4acb-a360-5e8a55aebd6c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.019654 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"ec9180c8-a1f4-49c8-b699-e8be6081edb0","Type":"ContainerStarted","Data":"dc33edf149c87829f0a0b1703d520b40a5cc6d73a082a37be38ce8be64e64ed2"} Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.019813 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ec9180c8-a1f4-49c8-b699-e8be6081edb0" containerName="cinder-api-log" containerID="cri-o://657c81e092410dc723272734c2690973035af2198bfaa777cb1ec07ae10b0b95" gracePeriod=30 Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.020087 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.020347 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="ec9180c8-a1f4-49c8-b699-e8be6081edb0" containerName="cinder-api" containerID="cri-o://dc33edf149c87829f0a0b1703d520b40a5cc6d73a082a37be38ce8be64e64ed2" gracePeriod=30 Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.038322 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a804903-a2db-4a38-88e6-53e34f30b44c","Type":"ContainerStarted","Data":"2e7314ecd62125dc99a2adb7adb28e158e23f99db76431b6dd8fc1e7d6f6b464"} Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.050182 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.050162421 podStartE2EDuration="6.050162421s" podCreationTimestamp="2025-12-11 10:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:04.046871243 +0000 UTC m=+1156.906734556" watchObservedRunningTime="2025-12-11 10:13:04.050162421 +0000 UTC m=+1156.910025734" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.451941 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.585749 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-scripts\") pod \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.585876 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5ck4\" (UniqueName: \"kubernetes.io/projected/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-kube-api-access-x5ck4\") pod \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.586240 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-config-data\") pod \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.587428 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-run-httpd\") pod \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.587800 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-sg-core-conf-yaml\") pod \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.587873 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-combined-ca-bundle\") pod \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.591517 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" (UID: "cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.700552 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-log-httpd\") pod \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\" (UID: \"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79\") " Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.701731 4746 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.702625 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" (UID: "cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.722976 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-kube-api-access-x5ck4" (OuterVolumeSpecName: "kube-api-access-x5ck4") pod "cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" (UID: "cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79"). 
InnerVolumeSpecName "kube-api-access-x5ck4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.733176 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-scripts" (OuterVolumeSpecName: "scripts") pod "cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" (UID: "cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.772643 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" (UID: "cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.803441 4746 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.803488 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.803500 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5ck4\" (UniqueName: \"kubernetes.io/projected/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-kube-api-access-x5ck4\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.803511 4746 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-sg-core-conf-yaml\") on node 
\"crc\" DevicePath \"\"" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.847144 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-config-data" (OuterVolumeSpecName: "config-data") pod "cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" (UID: "cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.868204 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" (UID: "cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.905168 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:04 crc kubenswrapper[4746]: I1211 10:13:04.905215 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.053161 4746 generic.go:334] "Generic (PLEG): container finished" podID="ec9180c8-a1f4-49c8-b699-e8be6081edb0" containerID="dc33edf149c87829f0a0b1703d520b40a5cc6d73a082a37be38ce8be64e64ed2" exitCode=0 Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.053202 4746 generic.go:334] "Generic (PLEG): container finished" podID="ec9180c8-a1f4-49c8-b699-e8be6081edb0" containerID="657c81e092410dc723272734c2690973035af2198bfaa777cb1ec07ae10b0b95" exitCode=143 Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 
10:13:05.053255 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ec9180c8-a1f4-49c8-b699-e8be6081edb0","Type":"ContainerDied","Data":"dc33edf149c87829f0a0b1703d520b40a5cc6d73a082a37be38ce8be64e64ed2"} Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.053287 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ec9180c8-a1f4-49c8-b699-e8be6081edb0","Type":"ContainerDied","Data":"657c81e092410dc723272734c2690973035af2198bfaa777cb1ec07ae10b0b95"} Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.056909 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79","Type":"ContainerDied","Data":"0ed3a4ffab091f56cf2e6f569aa2363e563d62efb15aa8f2f6d447b184887a94"} Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.056950 4746 scope.go:117] "RemoveContainer" containerID="aae63e1ffbe9b38b987faa84638c081c5d87fb1e3a8c31ffa4cf5f398789b77b" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.057084 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.157590 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.170083 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.179434 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:13:05 crc kubenswrapper[4746]: E1211 10:13:05.180033 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" containerName="ceilometer-notification-agent" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.180073 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" containerName="ceilometer-notification-agent" Dec 11 10:13:05 crc kubenswrapper[4746]: E1211 10:13:05.180117 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" containerName="proxy-httpd" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.180131 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" containerName="proxy-httpd" Dec 11 10:13:05 crc kubenswrapper[4746]: E1211 10:13:05.180150 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" containerName="sg-core" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.180159 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" containerName="sg-core" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.180413 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" containerName="ceilometer-notification-agent" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.180441 4746 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" containerName="proxy-httpd" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.180465 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" containerName="sg-core" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.182897 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.185270 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.191264 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.192532 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.212672 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nmd2\" (UniqueName: \"kubernetes.io/projected/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-kube-api-access-8nmd2\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.212754 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-config-data\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.212816 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-log-httpd\") pod \"ceilometer-0\" (UID: 
\"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.212852 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-scripts\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.212867 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.212980 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-run-httpd\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.213161 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.314976 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-log-httpd\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.315031 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-scripts\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.315061 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.315085 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-run-httpd\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.315114 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.315176 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nmd2\" (UniqueName: \"kubernetes.io/projected/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-kube-api-access-8nmd2\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.315214 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-config-data\") pod \"ceilometer-0\" (UID: 
\"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.317765 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-log-httpd\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.318383 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-run-httpd\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.321882 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.326269 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.326832 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-scripts\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.339491 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nmd2\" (UniqueName: 
\"kubernetes.io/projected/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-kube-api-access-8nmd2\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.340677 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-config-data\") pod \"ceilometer-0\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.528918 4746 scope.go:117] "RemoveContainer" containerID="19f9c318257ca4d1a480e995a0556dc6a7d53cb02db05c2ecc3badde42bd3dd5" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.530855 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.598501 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-65c49f59b-9mqvh" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.762753 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79" path="/var/lib/kubelet/pods/cfe9fb0a-1c37-4c44-af1b-3b70a1e5fb79/volumes" Dec 11 10:13:05 crc kubenswrapper[4746]: I1211 10:13:05.841184 4746 scope.go:117] "RemoveContainer" containerID="be130564269f4726b1cd5b96890cbde2d701ef99a4ac5a303a09a6fd10f2bd53" Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.101444 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"ec9180c8-a1f4-49c8-b699-e8be6081edb0","Type":"ContainerDied","Data":"415d76d5dced14ae4d70f69c9548105cb29437c32674d92d9355e1dd3361da60"} Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.101962 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="415d76d5dced14ae4d70f69c9548105cb29437c32674d92d9355e1dd3361da60" Dec 
11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.164610 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.183832 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-scripts\") pod \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.183901 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-config-data\") pod \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.184095 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-combined-ca-bundle\") pod \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.184147 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec9180c8-a1f4-49c8-b699-e8be6081edb0-etc-machine-id\") pod \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.184232 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdf5l\" (UniqueName: \"kubernetes.io/projected/ec9180c8-a1f4-49c8-b699-e8be6081edb0-kube-api-access-mdf5l\") pod \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.184305 
4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec9180c8-a1f4-49c8-b699-e8be6081edb0-logs\") pod \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.184335 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-config-data-custom\") pod \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\" (UID: \"ec9180c8-a1f4-49c8-b699-e8be6081edb0\") " Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.187216 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ec9180c8-a1f4-49c8-b699-e8be6081edb0-logs" (OuterVolumeSpecName: "logs") pod "ec9180c8-a1f4-49c8-b699-e8be6081edb0" (UID: "ec9180c8-a1f4-49c8-b699-e8be6081edb0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.189574 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec9180c8-a1f4-49c8-b699-e8be6081edb0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "ec9180c8-a1f4-49c8-b699-e8be6081edb0" (UID: "ec9180c8-a1f4-49c8-b699-e8be6081edb0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.235004 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-scripts" (OuterVolumeSpecName: "scripts") pod "ec9180c8-a1f4-49c8-b699-e8be6081edb0" (UID: "ec9180c8-a1f4-49c8-b699-e8be6081edb0"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.235244 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "ec9180c8-a1f4-49c8-b699-e8be6081edb0" (UID: "ec9180c8-a1f4-49c8-b699-e8be6081edb0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.248317 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec9180c8-a1f4-49c8-b699-e8be6081edb0-kube-api-access-mdf5l" (OuterVolumeSpecName: "kube-api-access-mdf5l") pod "ec9180c8-a1f4-49c8-b699-e8be6081edb0" (UID: "ec9180c8-a1f4-49c8-b699-e8be6081edb0"). InnerVolumeSpecName "kube-api-access-mdf5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.289378 4746 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/ec9180c8-a1f4-49c8-b699-e8be6081edb0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.289424 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdf5l\" (UniqueName: \"kubernetes.io/projected/ec9180c8-a1f4-49c8-b699-e8be6081edb0-kube-api-access-mdf5l\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.289438 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ec9180c8-a1f4-49c8-b699-e8be6081edb0-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.289451 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.289463 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:06 crc kubenswrapper[4746]: E1211 10:13:06.364366 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25e7a0d4_d4a7_4a8e_898a_57fdbd3c2101.slice/crio-eb638379b914e22cfce30de40cb2cf4ebe7022713c56d8faffa7195121e1bd27.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25e7a0d4_d4a7_4a8e_898a_57fdbd3c2101.slice/crio-conmon-eb638379b914e22cfce30de40cb2cf4ebe7022713c56d8faffa7195121e1bd27.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.466146 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec9180c8-a1f4-49c8-b699-e8be6081edb0" (UID: "ec9180c8-a1f4-49c8-b699-e8be6081edb0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.496788 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.533289 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-config-data" (OuterVolumeSpecName: "config-data") pod "ec9180c8-a1f4-49c8-b699-e8be6081edb0" (UID: "ec9180c8-a1f4-49c8-b699-e8be6081edb0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.570085 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d68478596-8jx82"] Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.599530 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec9180c8-a1f4-49c8-b699-e8be6081edb0-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:06 crc kubenswrapper[4746]: I1211 10:13:06.770696 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.128482 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55c6884c59-pb7cb" event={"ID":"17f2e937-e45d-48e4-be34-f013cb61dc7e","Type":"ContainerStarted","Data":"cec181307c94a532446da5a5784dcc228cf9ed34766a9eff1c6f4dc435635e3f"} Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.129701 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d68478596-8jx82" event={"ID":"80265cad-1b6f-4dfc-aee2-04a1da6152fc","Type":"ContainerStarted","Data":"90ce0b566c4f0839a459aa6d77e7aa131fc9e303ef1d7460b78f8bc6aaaac805"} Dec 11 10:13:07 crc kubenswrapper[4746]: 
I1211 10:13:07.130966 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e9963db-bd42-4b27-8fae-8d2f99a3db1e","Type":"ContainerStarted","Data":"1a96300c66850e6ed4dc17b460034428162861261c6f2c5d7bd130f435dfd67f"} Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.139953 4746 generic.go:334] "Generic (PLEG): container finished" podID="25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101" containerID="eb638379b914e22cfce30de40cb2cf4ebe7022713c56d8faffa7195121e1bd27" exitCode=0 Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.140025 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-548544f678-9jrhc" event={"ID":"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101","Type":"ContainerDied","Data":"eb638379b914e22cfce30de40cb2cf4ebe7022713c56d8faffa7195121e1bd27"} Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.149719 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.149721 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" event={"ID":"6346d2a5-4279-407e-981e-423993612a5c","Type":"ContainerStarted","Data":"3fd9fdc0f1fd7f8918a6f3413c495ef71fa732f457089b3657148b7ae65e53ff"} Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.399865 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.536109 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.546681 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.609109 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:13:07 crc kubenswrapper[4746]: E1211 
10:13:07.609592 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9180c8-a1f4-49c8-b699-e8be6081edb0" containerName="cinder-api-log" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.609604 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9180c8-a1f4-49c8-b699-e8be6081edb0" containerName="cinder-api-log" Dec 11 10:13:07 crc kubenswrapper[4746]: E1211 10:13:07.609617 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec9180c8-a1f4-49c8-b699-e8be6081edb0" containerName="cinder-api" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.609623 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec9180c8-a1f4-49c8-b699-e8be6081edb0" containerName="cinder-api" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.609806 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec9180c8-a1f4-49c8-b699-e8be6081edb0" containerName="cinder-api" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.609831 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec9180c8-a1f4-49c8-b699-e8be6081edb0" containerName="cinder-api-log" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.610908 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.625561 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.626064 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.626425 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.643288 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-config-data\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.643372 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93766680-5fd5-4cc4-9ab8-128daeec573d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.643399 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.648514 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-scripts\") pod \"cinder-api-0\" (UID: 
\"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.648571 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.650935 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9d79\" (UniqueName: \"kubernetes.io/projected/93766680-5fd5-4cc4-9ab8-128daeec573d-kube-api-access-b9d79\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.651627 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-config-data-custom\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.651773 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.651853 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93766680-5fd5-4cc4-9ab8-128daeec573d-logs\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: 
I1211 10:13:07.854346 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9d79\" (UniqueName: \"kubernetes.io/projected/93766680-5fd5-4cc4-9ab8-128daeec573d-kube-api-access-b9d79\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.854407 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-config-data-custom\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.854466 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.854498 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93766680-5fd5-4cc4-9ab8-128daeec573d-logs\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.854529 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-config-data\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.854596 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93766680-5fd5-4cc4-9ab8-128daeec573d-etc-machine-id\") pod 
\"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.854619 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.854676 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-scripts\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.854698 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.855445 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/93766680-5fd5-4cc4-9ab8-128daeec573d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.856024 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93766680-5fd5-4cc4-9ab8-128daeec573d-logs\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.856926 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="ec9180c8-a1f4-49c8-b699-e8be6081edb0" path="/var/lib/kubelet/pods/ec9180c8-a1f4-49c8-b699-e8be6081edb0/volumes" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.875166 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.878220 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-config-data\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.891194 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.897086 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.897427 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-config-data-custom\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.899682 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " 
pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.906573 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93766680-5fd5-4cc4-9ab8-128daeec573d-scripts\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:07 crc kubenswrapper[4746]: I1211 10:13:07.907295 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9d79\" (UniqueName: \"kubernetes.io/projected/93766680-5fd5-4cc4-9ab8-128daeec573d-kube-api-access-b9d79\") pod \"cinder-api-0\" (UID: \"93766680-5fd5-4cc4-9ab8-128daeec573d\") " pod="openstack/cinder-api-0" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.146616 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.167095 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-55c6884c59-pb7cb" event={"ID":"17f2e937-e45d-48e4-be34-f013cb61dc7e","Type":"ContainerStarted","Data":"a0de5b1d1df921ab75e78dd6be689ec9332e0a5f6f5bd036097d58d27b0301b8"} Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.183990 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d68478596-8jx82" event={"ID":"80265cad-1b6f-4dfc-aee2-04a1da6152fc","Type":"ContainerStarted","Data":"86fecbc25da35e554024fb6ff237c659c29aab3e0a53264b2d99d9924b442d01"} Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.201122 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-548544f678-9jrhc" event={"ID":"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101","Type":"ContainerDied","Data":"5a9811607faabd467b4098b45031716dd077ec3860d8090089b7fc61b127c4bd"} Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.201171 4746 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="5a9811607faabd467b4098b45031716dd077ec3860d8090089b7fc61b127c4bd" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.202861 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-55c6884c59-pb7cb" podStartSLOduration=7.764432528 podStartE2EDuration="13.202838396s" podCreationTimestamp="2025-12-11 10:12:55 +0000 UTC" firstStartedPulling="2025-12-11 10:13:00.090796108 +0000 UTC m=+1152.950659421" lastFinishedPulling="2025-12-11 10:13:05.529201976 +0000 UTC m=+1158.389065289" observedRunningTime="2025-12-11 10:13:08.193564676 +0000 UTC m=+1161.053427989" watchObservedRunningTime="2025-12-11 10:13:08.202838396 +0000 UTC m=+1161.062701699" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.244132 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.245500 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.258911 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.262604 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-6mqbj" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.262823 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.262937 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.366849 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6zk8\" (UniqueName: \"kubernetes.io/projected/f39575aa-fcfa-42ad-aceb-a8611602030f-kube-api-access-z6zk8\") pod 
\"openstackclient\" (UID: \"f39575aa-fcfa-42ad-aceb-a8611602030f\") " pod="openstack/openstackclient" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.366943 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f39575aa-fcfa-42ad-aceb-a8611602030f-openstack-config-secret\") pod \"openstackclient\" (UID: \"f39575aa-fcfa-42ad-aceb-a8611602030f\") " pod="openstack/openstackclient" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.367540 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39575aa-fcfa-42ad-aceb-a8611602030f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f39575aa-fcfa-42ad-aceb-a8611602030f\") " pod="openstack/openstackclient" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.367729 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f39575aa-fcfa-42ad-aceb-a8611602030f-openstack-config\") pod \"openstackclient\" (UID: \"f39575aa-fcfa-42ad-aceb-a8611602030f\") " pod="openstack/openstackclient" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.433219 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.473520 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39575aa-fcfa-42ad-aceb-a8611602030f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f39575aa-fcfa-42ad-aceb-a8611602030f\") " pod="openstack/openstackclient" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.473583 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/f39575aa-fcfa-42ad-aceb-a8611602030f-openstack-config\") pod \"openstackclient\" (UID: \"f39575aa-fcfa-42ad-aceb-a8611602030f\") " pod="openstack/openstackclient" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.473642 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6zk8\" (UniqueName: \"kubernetes.io/projected/f39575aa-fcfa-42ad-aceb-a8611602030f-kube-api-access-z6zk8\") pod \"openstackclient\" (UID: \"f39575aa-fcfa-42ad-aceb-a8611602030f\") " pod="openstack/openstackclient" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.473664 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f39575aa-fcfa-42ad-aceb-a8611602030f-openstack-config-secret\") pod \"openstackclient\" (UID: \"f39575aa-fcfa-42ad-aceb-a8611602030f\") " pod="openstack/openstackclient" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.475921 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/f39575aa-fcfa-42ad-aceb-a8611602030f-openstack-config\") pod \"openstackclient\" (UID: \"f39575aa-fcfa-42ad-aceb-a8611602030f\") " pod="openstack/openstackclient" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.490119 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f39575aa-fcfa-42ad-aceb-a8611602030f-combined-ca-bundle\") pod \"openstackclient\" (UID: \"f39575aa-fcfa-42ad-aceb-a8611602030f\") " pod="openstack/openstackclient" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.494683 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/f39575aa-fcfa-42ad-aceb-a8611602030f-openstack-config-secret\") pod \"openstackclient\" (UID: \"f39575aa-fcfa-42ad-aceb-a8611602030f\") 
" pod="openstack/openstackclient" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.510274 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6zk8\" (UniqueName: \"kubernetes.io/projected/f39575aa-fcfa-42ad-aceb-a8611602030f-kube-api-access-z6zk8\") pod \"openstackclient\" (UID: \"f39575aa-fcfa-42ad-aceb-a8611602030f\") " pod="openstack/openstackclient" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.562866 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-vm6ft"] Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.567684 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" podUID="b3ee6299-beee-4379-86e4-89b33e6e11d0" containerName="dnsmasq-dns" containerID="cri-o://1e76c7607f91642a298a018feb57611053d8ba931362e37d6698a32f1f74e771" gracePeriod=10 Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.845416 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.864095 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 11 10:13:08 crc kubenswrapper[4746]: I1211 10:13:08.999941 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-ovndb-tls-certs\") pod \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.000141 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-combined-ca-bundle\") pod \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.000218 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwb4v\" (UniqueName: \"kubernetes.io/projected/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-kube-api-access-nwb4v\") pod \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.000305 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-httpd-config\") pod \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.000369 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-config\") pod \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\" (UID: \"25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101\") " Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.009626 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101" (UID: "25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.059617 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-kube-api-access-nwb4v" (OuterVolumeSpecName: "kube-api-access-nwb4v") pod "25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101" (UID: "25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101"). InnerVolumeSpecName "kube-api-access-nwb4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.105969 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwb4v\" (UniqueName: \"kubernetes.io/projected/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-kube-api-access-nwb4v\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.106002 4746 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.132501 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.250341 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"93766680-5fd5-4cc4-9ab8-128daeec573d","Type":"ContainerStarted","Data":"ed0fb922918d05399e9824f5977491b49f7334acdcf867a42ad91198f18c767d"} Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.267224 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" 
event={"ID":"6346d2a5-4279-407e-981e-423993612a5c","Type":"ContainerStarted","Data":"ce54c3cc7769f043e0b323264a9394efdf1e2e23df3b5dbc3f313a1276ae36d7"} Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.277249 4746 generic.go:334] "Generic (PLEG): container finished" podID="b3ee6299-beee-4379-86e4-89b33e6e11d0" containerID="1e76c7607f91642a298a018feb57611053d8ba931362e37d6698a32f1f74e771" exitCode=0 Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.277330 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" event={"ID":"b3ee6299-beee-4379-86e4-89b33e6e11d0","Type":"ContainerDied","Data":"1e76c7607f91642a298a018feb57611053d8ba931362e37d6698a32f1f74e771"} Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.297185 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a804903-a2db-4a38-88e6-53e34f30b44c","Type":"ContainerStarted","Data":"1e3396cf06dc080c3c4faa3009ae507ab0c10fb7366060c713ea7934d50b0e4d"} Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.297261 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-548544f678-9jrhc" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.318201 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5d4579cd86-47qwg" podStartSLOduration=8.588912457 podStartE2EDuration="14.318174614s" podCreationTimestamp="2025-12-11 10:12:55 +0000 UTC" firstStartedPulling="2025-12-11 10:12:59.728066648 +0000 UTC m=+1152.587929961" lastFinishedPulling="2025-12-11 10:13:05.457328805 +0000 UTC m=+1158.317192118" observedRunningTime="2025-12-11 10:13:09.297855308 +0000 UTC m=+1162.157718641" watchObservedRunningTime="2025-12-11 10:13:09.318174614 +0000 UTC m=+1162.178037927" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.323267 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.357737 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=10.780419214 podStartE2EDuration="12.357715216s" podCreationTimestamp="2025-12-11 10:12:57 +0000 UTC" firstStartedPulling="2025-12-11 10:13:00.164171948 +0000 UTC m=+1153.024035261" lastFinishedPulling="2025-12-11 10:13:01.74146795 +0000 UTC m=+1154.601331263" observedRunningTime="2025-12-11 10:13:09.332495448 +0000 UTC m=+1162.192358761" watchObservedRunningTime="2025-12-11 10:13:09.357715216 +0000 UTC m=+1162.217578529" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.416357 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-config" (OuterVolumeSpecName: "config") pod "25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101" (UID: "25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.421924 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-config\") pod \"b3ee6299-beee-4379-86e4-89b33e6e11d0\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.422179 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-ovsdbserver-sb\") pod \"b3ee6299-beee-4379-86e4-89b33e6e11d0\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.422214 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-ovsdbserver-nb\") pod \"b3ee6299-beee-4379-86e4-89b33e6e11d0\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.422285 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-dns-svc\") pod \"b3ee6299-beee-4379-86e4-89b33e6e11d0\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.422341 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-dns-swift-storage-0\") pod \"b3ee6299-beee-4379-86e4-89b33e6e11d0\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.422369 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg6ns\" (UniqueName: 
\"kubernetes.io/projected/b3ee6299-beee-4379-86e4-89b33e6e11d0-kube-api-access-qg6ns\") pod \"b3ee6299-beee-4379-86e4-89b33e6e11d0\" (UID: \"b3ee6299-beee-4379-86e4-89b33e6e11d0\") " Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.423002 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.467174 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3ee6299-beee-4379-86e4-89b33e6e11d0-kube-api-access-qg6ns" (OuterVolumeSpecName: "kube-api-access-qg6ns") pod "b3ee6299-beee-4379-86e4-89b33e6e11d0" (UID: "b3ee6299-beee-4379-86e4-89b33e6e11d0"). InnerVolumeSpecName "kube-api-access-qg6ns". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.530592 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg6ns\" (UniqueName: \"kubernetes.io/projected/b3ee6299-beee-4379-86e4-89b33e6e11d0-kube-api-access-qg6ns\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.607117 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.628239 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101" (UID: "25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.633309 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101" (UID: "25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.674503 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.674540 4746 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.799097 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b3ee6299-beee-4379-86e4-89b33e6e11d0" (UID: "b3ee6299-beee-4379-86e4-89b33e6e11d0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.824638 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-config" (OuterVolumeSpecName: "config") pod "b3ee6299-beee-4379-86e4-89b33e6e11d0" (UID: "b3ee6299-beee-4379-86e4-89b33e6e11d0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.848464 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b3ee6299-beee-4379-86e4-89b33e6e11d0" (UID: "b3ee6299-beee-4379-86e4-89b33e6e11d0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.878680 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b3ee6299-beee-4379-86e4-89b33e6e11d0" (UID: "b3ee6299-beee-4379-86e4-89b33e6e11d0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.880023 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.880055 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.880067 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.880077 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:09 crc 
kubenswrapper[4746]: I1211 10:13:09.899805 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b3ee6299-beee-4379-86e4-89b33e6e11d0" (UID: "b3ee6299-beee-4379-86e4-89b33e6e11d0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.948349 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-548544f678-9jrhc"] Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.973430 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-548544f678-9jrhc"] Dec 11 10:13:09 crc kubenswrapper[4746]: I1211 10:13:09.982348 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b3ee6299-beee-4379-86e4-89b33e6e11d0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:10 crc kubenswrapper[4746]: I1211 10:13:10.330209 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f39575aa-fcfa-42ad-aceb-a8611602030f","Type":"ContainerStarted","Data":"f996856f671dad400edd4a7509d6e119e8ddd9fd7c4e36467a561bd6ed225409"} Dec 11 10:13:10 crc kubenswrapper[4746]: I1211 10:13:10.335677 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:13:10 crc kubenswrapper[4746]: I1211 10:13:10.359554 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d68478596-8jx82" event={"ID":"80265cad-1b6f-4dfc-aee2-04a1da6152fc","Type":"ContainerStarted","Data":"41ec847ed9b093a6b9275267f9cbcf16c1dd6395222d66a5c3a02f644ae71e60"} Dec 11 10:13:10 crc kubenswrapper[4746]: I1211 10:13:10.361201 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d68478596-8jx82" Dec 11 
10:13:10 crc kubenswrapper[4746]: I1211 10:13:10.361258 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:10 crc kubenswrapper[4746]: I1211 10:13:10.379636 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e9963db-bd42-4b27-8fae-8d2f99a3db1e","Type":"ContainerStarted","Data":"6103cee71f225ddff31578e0fde397685cf2218d226db9588d3d48137f7bb27c"} Dec 11 10:13:10 crc kubenswrapper[4746]: I1211 10:13:10.396704 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-d68478596-8jx82" podStartSLOduration=7.396678383 podStartE2EDuration="7.396678383s" podCreationTimestamp="2025-12-11 10:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:10.390800175 +0000 UTC m=+1163.250663488" watchObservedRunningTime="2025-12-11 10:13:10.396678383 +0000 UTC m=+1163.256541696" Dec 11 10:13:10 crc kubenswrapper[4746]: I1211 10:13:10.397803 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" Dec 11 10:13:10 crc kubenswrapper[4746]: I1211 10:13:10.399876 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-vm6ft" event={"ID":"b3ee6299-beee-4379-86e4-89b33e6e11d0","Type":"ContainerDied","Data":"762ebc00f9d6653b01b7294143f5a673c3e519570ce4417716ea684b92ce2b20"} Dec 11 10:13:10 crc kubenswrapper[4746]: I1211 10:13:10.399969 4746 scope.go:117] "RemoveContainer" containerID="1e76c7607f91642a298a018feb57611053d8ba931362e37d6698a32f1f74e771" Dec 11 10:13:10 crc kubenswrapper[4746]: I1211 10:13:10.443258 4746 scope.go:117] "RemoveContainer" containerID="cece8af58d55056fa070b9d99636f11b09102e8cf9c5a2e6a166ebbbe61b35ac" Dec 11 10:13:10 crc kubenswrapper[4746]: I1211 10:13:10.458137 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-vm6ft"] Dec 11 10:13:10 crc kubenswrapper[4746]: I1211 10:13:10.466396 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-vm6ft"] Dec 11 10:13:10 crc kubenswrapper[4746]: I1211 10:13:10.722709 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:13:11 crc kubenswrapper[4746]: I1211 10:13:11.353921 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-77c4bd4944-jbg88" podUID="016a98db-33b2-4acb-a360-5e8a55aebd6c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:52384->10.217.0.145:8443: read: connection reset by peer" Dec 11 10:13:11 crc kubenswrapper[4746]: I1211 10:13:11.428719 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e9963db-bd42-4b27-8fae-8d2f99a3db1e","Type":"ContainerStarted","Data":"45d1f378acb28195ced7fd82cf7ad8e025fe11e60d035a7b35cd8f4a3501783a"} Dec 11 10:13:11 crc 
kubenswrapper[4746]: I1211 10:13:11.451149 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"93766680-5fd5-4cc4-9ab8-128daeec573d","Type":"ContainerStarted","Data":"c5eebec2aaa851a56222008ad88a6088f6b54a11dc9208a5c86af19aa237f6fc"} Dec 11 10:13:11 crc kubenswrapper[4746]: I1211 10:13:11.646343 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101" path="/var/lib/kubelet/pods/25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101/volumes" Dec 11 10:13:11 crc kubenswrapper[4746]: I1211 10:13:11.647385 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3ee6299-beee-4379-86e4-89b33e6e11d0" path="/var/lib/kubelet/pods/b3ee6299-beee-4379-86e4-89b33e6e11d0/volumes" Dec 11 10:13:12 crc kubenswrapper[4746]: I1211 10:13:12.486314 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e9963db-bd42-4b27-8fae-8d2f99a3db1e","Type":"ContainerStarted","Data":"d3d27e5f0a1561e3045273a1188d91024dfbde5478a126d5406f7abe115bd56e"} Dec 11 10:13:12 crc kubenswrapper[4746]: I1211 10:13:12.536924 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"93766680-5fd5-4cc4-9ab8-128daeec573d","Type":"ContainerStarted","Data":"9657e9495b8963646ed05b7b1289f392a70364ab43db438f85f354f2f6d309a3"} Dec 11 10:13:12 crc kubenswrapper[4746]: I1211 10:13:12.537196 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 11 10:13:12 crc kubenswrapper[4746]: I1211 10:13:12.553732 4746 generic.go:334] "Generic (PLEG): container finished" podID="016a98db-33b2-4acb-a360-5e8a55aebd6c" containerID="453c2a84853f09eb6406ce82bcc4f8dd06a8ac4f033e57a7416674cbed96eb48" exitCode=0 Dec 11 10:13:12 crc kubenswrapper[4746]: I1211 10:13:12.556369 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77c4bd4944-jbg88" 
event={"ID":"016a98db-33b2-4acb-a360-5e8a55aebd6c","Type":"ContainerDied","Data":"453c2a84853f09eb6406ce82bcc4f8dd06a8ac4f033e57a7416674cbed96eb48"} Dec 11 10:13:12 crc kubenswrapper[4746]: I1211 10:13:12.592707 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.592683678 podStartE2EDuration="5.592683678s" podCreationTimestamp="2025-12-11 10:13:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:12.581888519 +0000 UTC m=+1165.441751832" watchObservedRunningTime="2025-12-11 10:13:12.592683678 +0000 UTC m=+1165.452546991" Dec 11 10:13:13 crc kubenswrapper[4746]: I1211 10:13:13.301726 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 11 10:13:13 crc kubenswrapper[4746]: I1211 10:13:13.570731 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 11 10:13:13 crc kubenswrapper[4746]: I1211 10:13:13.647495 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 10:13:14 crc kubenswrapper[4746]: I1211 10:13:14.580003 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e9963db-bd42-4b27-8fae-8d2f99a3db1e","Type":"ContainerStarted","Data":"b56ba54be38f89b4417ed65bcf05be89600ff0ef38aacab93c6c003a87879ee1"} Dec 11 10:13:14 crc kubenswrapper[4746]: I1211 10:13:14.580192 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="2a804903-a2db-4a38-88e6-53e34f30b44c" containerName="cinder-scheduler" containerID="cri-o://2e7314ecd62125dc99a2adb7adb28e158e23f99db76431b6dd8fc1e7d6f6b464" gracePeriod=30 Dec 11 10:13:14 crc kubenswrapper[4746]: I1211 10:13:14.580242 4746 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/cinder-scheduler-0" podUID="2a804903-a2db-4a38-88e6-53e34f30b44c" containerName="probe" containerID="cri-o://1e3396cf06dc080c3c4faa3009ae507ab0c10fb7366060c713ea7934d50b0e4d" gracePeriod=30 Dec 11 10:13:15 crc kubenswrapper[4746]: I1211 10:13:15.403505 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:15 crc kubenswrapper[4746]: I1211 10:13:15.438880 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.837637281 podStartE2EDuration="10.438853121s" podCreationTimestamp="2025-12-11 10:13:05 +0000 UTC" firstStartedPulling="2025-12-11 10:13:06.812472933 +0000 UTC m=+1159.672336246" lastFinishedPulling="2025-12-11 10:13:13.413688773 +0000 UTC m=+1166.273552086" observedRunningTime="2025-12-11 10:13:14.616511551 +0000 UTC m=+1167.476374864" watchObservedRunningTime="2025-12-11 10:13:15.438853121 +0000 UTC m=+1168.298716434" Dec 11 10:13:15 crc kubenswrapper[4746]: I1211 10:13:15.593350 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.054128 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7cc45cfb45-bbbq8"] Dec 11 10:13:16 crc kubenswrapper[4746]: E1211 10:13:16.055373 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ee6299-beee-4379-86e4-89b33e6e11d0" containerName="dnsmasq-dns" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.055452 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ee6299-beee-4379-86e4-89b33e6e11d0" containerName="dnsmasq-dns" Dec 11 10:13:16 crc kubenswrapper[4746]: E1211 10:13:16.055488 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101" containerName="neutron-api" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.055496 4746 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101" containerName="neutron-api" Dec 11 10:13:16 crc kubenswrapper[4746]: E1211 10:13:16.055524 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101" containerName="neutron-httpd" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.055531 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101" containerName="neutron-httpd" Dec 11 10:13:16 crc kubenswrapper[4746]: E1211 10:13:16.055550 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3ee6299-beee-4379-86e4-89b33e6e11d0" containerName="init" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.055558 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3ee6299-beee-4379-86e4-89b33e6e11d0" containerName="init" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.055848 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3ee6299-beee-4379-86e4-89b33e6e11d0" containerName="dnsmasq-dns" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.055872 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101" containerName="neutron-httpd" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.055887 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="25e7a0d4-d4a7-4a8e-898a-57fdbd3c2101" containerName="neutron-api" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.057739 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.065937 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.065978 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.066140 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.085886 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7cc45cfb45-bbbq8"] Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.120129 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-log-httpd\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.120221 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-internal-tls-certs\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.120289 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-run-httpd\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 
10:13:16.120335 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-combined-ca-bundle\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.120401 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-etc-swift\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.120556 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-public-tls-certs\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.120695 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-config-data\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.120735 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpvq6\" (UniqueName: \"kubernetes.io/projected/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-kube-api-access-qpvq6\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 
11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.156571 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-77c4bd4944-jbg88" podUID="016a98db-33b2-4acb-a360-5e8a55aebd6c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.225250 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-internal-tls-certs\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.225323 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-run-httpd\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.225354 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-combined-ca-bundle\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.225403 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-etc-swift\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.225430 
4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-public-tls-certs\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.225462 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-config-data\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.225482 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpvq6\" (UniqueName: \"kubernetes.io/projected/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-kube-api-access-qpvq6\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.225517 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-log-httpd\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.225962 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-log-httpd\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.227521 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-run-httpd\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.249585 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-etc-swift\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.251941 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-combined-ca-bundle\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.253985 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-public-tls-certs\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.257141 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-internal-tls-certs\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.257307 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpvq6\" (UniqueName: 
\"kubernetes.io/projected/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-kube-api-access-qpvq6\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.260003 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c65e9de-7890-47aa-bcf7-48cdfd6dd262-config-data\") pod \"swift-proxy-7cc45cfb45-bbbq8\" (UID: \"5c65e9de-7890-47aa-bcf7-48cdfd6dd262\") " pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.388002 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.605184 4746 generic.go:334] "Generic (PLEG): container finished" podID="2a804903-a2db-4a38-88e6-53e34f30b44c" containerID="1e3396cf06dc080c3c4faa3009ae507ab0c10fb7366060c713ea7934d50b0e4d" exitCode=0 Dec 11 10:13:16 crc kubenswrapper[4746]: I1211 10:13:16.606880 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a804903-a2db-4a38-88e6-53e34f30b44c","Type":"ContainerDied","Data":"1e3396cf06dc080c3c4faa3009ae507ab0c10fb7366060c713ea7934d50b0e4d"} Dec 11 10:13:17 crc kubenswrapper[4746]: I1211 10:13:17.166671 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d68478596-8jx82" Dec 11 10:13:17 crc kubenswrapper[4746]: I1211 10:13:17.233440 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b5f77766d-fffkr"] Dec 11 10:13:17 crc kubenswrapper[4746]: I1211 10:13:17.238528 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b5f77766d-fffkr" podUID="360eee01-6461-4d34-b011-888f1f1026ac" containerName="barbican-api-log" 
containerID="cri-o://b346cef92731e27d2fdf01205d4e416356b77666acb8fae6f8950a703ae5649a" gracePeriod=30 Dec 11 10:13:17 crc kubenswrapper[4746]: I1211 10:13:17.238666 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7b5f77766d-fffkr" podUID="360eee01-6461-4d34-b011-888f1f1026ac" containerName="barbican-api" containerID="cri-o://ce0a3a416f1d2ac59317e18b6d063b00f2d2adc7c6f1428893f28e77dac86eed" gracePeriod=30 Dec 11 10:13:17 crc kubenswrapper[4746]: I1211 10:13:17.636596 4746 generic.go:334] "Generic (PLEG): container finished" podID="2a804903-a2db-4a38-88e6-53e34f30b44c" containerID="2e7314ecd62125dc99a2adb7adb28e158e23f99db76431b6dd8fc1e7d6f6b464" exitCode=0 Dec 11 10:13:17 crc kubenswrapper[4746]: I1211 10:13:17.642763 4746 generic.go:334] "Generic (PLEG): container finished" podID="360eee01-6461-4d34-b011-888f1f1026ac" containerID="b346cef92731e27d2fdf01205d4e416356b77666acb8fae6f8950a703ae5649a" exitCode=143 Dec 11 10:13:17 crc kubenswrapper[4746]: I1211 10:13:17.655002 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a804903-a2db-4a38-88e6-53e34f30b44c","Type":"ContainerDied","Data":"2e7314ecd62125dc99a2adb7adb28e158e23f99db76431b6dd8fc1e7d6f6b464"} Dec 11 10:13:17 crc kubenswrapper[4746]: I1211 10:13:17.655101 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5f77766d-fffkr" event={"ID":"360eee01-6461-4d34-b011-888f1f1026ac","Type":"ContainerDied","Data":"b346cef92731e27d2fdf01205d4e416356b77666acb8fae6f8950a703ae5649a"} Dec 11 10:13:18 crc kubenswrapper[4746]: I1211 10:13:18.574899 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:13:18 crc kubenswrapper[4746]: I1211 10:13:18.575440 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerName="ceilometer-central-agent" 
containerID="cri-o://6103cee71f225ddff31578e0fde397685cf2218d226db9588d3d48137f7bb27c" gracePeriod=30 Dec 11 10:13:18 crc kubenswrapper[4746]: I1211 10:13:18.575568 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerName="ceilometer-notification-agent" containerID="cri-o://45d1f378acb28195ced7fd82cf7ad8e025fe11e60d035a7b35cd8f4a3501783a" gracePeriod=30 Dec 11 10:13:18 crc kubenswrapper[4746]: I1211 10:13:18.575553 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerName="sg-core" containerID="cri-o://d3d27e5f0a1561e3045273a1188d91024dfbde5478a126d5406f7abe115bd56e" gracePeriod=30 Dec 11 10:13:18 crc kubenswrapper[4746]: I1211 10:13:18.575742 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerName="proxy-httpd" containerID="cri-o://b56ba54be38f89b4417ed65bcf05be89600ff0ef38aacab93c6c003a87879ee1" gracePeriod=30 Dec 11 10:13:19 crc kubenswrapper[4746]: I1211 10:13:19.722384 4746 generic.go:334] "Generic (PLEG): container finished" podID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerID="b56ba54be38f89b4417ed65bcf05be89600ff0ef38aacab93c6c003a87879ee1" exitCode=0 Dec 11 10:13:19 crc kubenswrapper[4746]: I1211 10:13:19.722703 4746 generic.go:334] "Generic (PLEG): container finished" podID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerID="d3d27e5f0a1561e3045273a1188d91024dfbde5478a126d5406f7abe115bd56e" exitCode=2 Dec 11 10:13:19 crc kubenswrapper[4746]: I1211 10:13:19.722718 4746 generic.go:334] "Generic (PLEG): container finished" podID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerID="45d1f378acb28195ced7fd82cf7ad8e025fe11e60d035a7b35cd8f4a3501783a" exitCode=0 Dec 11 10:13:19 crc kubenswrapper[4746]: I1211 10:13:19.722709 4746 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e9963db-bd42-4b27-8fae-8d2f99a3db1e","Type":"ContainerDied","Data":"b56ba54be38f89b4417ed65bcf05be89600ff0ef38aacab93c6c003a87879ee1"} Dec 11 10:13:19 crc kubenswrapper[4746]: I1211 10:13:19.722764 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e9963db-bd42-4b27-8fae-8d2f99a3db1e","Type":"ContainerDied","Data":"d3d27e5f0a1561e3045273a1188d91024dfbde5478a126d5406f7abe115bd56e"} Dec 11 10:13:19 crc kubenswrapper[4746]: I1211 10:13:19.722778 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e9963db-bd42-4b27-8fae-8d2f99a3db1e","Type":"ContainerDied","Data":"45d1f378acb28195ced7fd82cf7ad8e025fe11e60d035a7b35cd8f4a3501783a"} Dec 11 10:13:19 crc kubenswrapper[4746]: I1211 10:13:19.722788 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e9963db-bd42-4b27-8fae-8d2f99a3db1e","Type":"ContainerDied","Data":"6103cee71f225ddff31578e0fde397685cf2218d226db9588d3d48137f7bb27c"} Dec 11 10:13:19 crc kubenswrapper[4746]: I1211 10:13:19.722731 4746 generic.go:334] "Generic (PLEG): container finished" podID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerID="6103cee71f225ddff31578e0fde397685cf2218d226db9588d3d48137f7bb27c" exitCode=0 Dec 11 10:13:20 crc kubenswrapper[4746]: I1211 10:13:20.798940 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 11 10:13:21 crc kubenswrapper[4746]: I1211 10:13:21.735928 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7b5f77766d-fffkr" podUID="360eee01-6461-4d34-b011-888f1f1026ac" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": dial tcp 10.217.0.161:9311: connect: connection refused" Dec 11 10:13:21 crc kubenswrapper[4746]: I1211 10:13:21.736026 4746 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/barbican-api-7b5f77766d-fffkr" podUID="360eee01-6461-4d34-b011-888f1f1026ac" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": dial tcp 10.217.0.161:9311: connect: connection refused" Dec 11 10:13:21 crc kubenswrapper[4746]: I1211 10:13:21.754068 4746 generic.go:334] "Generic (PLEG): container finished" podID="360eee01-6461-4d34-b011-888f1f1026ac" containerID="ce0a3a416f1d2ac59317e18b6d063b00f2d2adc7c6f1428893f28e77dac86eed" exitCode=0 Dec 11 10:13:21 crc kubenswrapper[4746]: I1211 10:13:21.754100 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5f77766d-fffkr" event={"ID":"360eee01-6461-4d34-b011-888f1f1026ac","Type":"ContainerDied","Data":"ce0a3a416f1d2ac59317e18b6d063b00f2d2adc7c6f1428893f28e77dac86eed"} Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.138132 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.243203 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-combined-ca-bundle\") pod \"360eee01-6461-4d34-b011-888f1f1026ac\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.243651 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fkts\" (UniqueName: \"kubernetes.io/projected/360eee01-6461-4d34-b011-888f1f1026ac-kube-api-access-9fkts\") pod \"360eee01-6461-4d34-b011-888f1f1026ac\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.243825 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-config-data\") pod \"360eee01-6461-4d34-b011-888f1f1026ac\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.243876 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-config-data-custom\") pod \"360eee01-6461-4d34-b011-888f1f1026ac\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.244025 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/360eee01-6461-4d34-b011-888f1f1026ac-logs\") pod \"360eee01-6461-4d34-b011-888f1f1026ac\" (UID: \"360eee01-6461-4d34-b011-888f1f1026ac\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.245627 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/360eee01-6461-4d34-b011-888f1f1026ac-logs" (OuterVolumeSpecName: "logs") pod "360eee01-6461-4d34-b011-888f1f1026ac" (UID: "360eee01-6461-4d34-b011-888f1f1026ac"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.245976 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/360eee01-6461-4d34-b011-888f1f1026ac-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.249683 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/360eee01-6461-4d34-b011-888f1f1026ac-kube-api-access-9fkts" (OuterVolumeSpecName: "kube-api-access-9fkts") pod "360eee01-6461-4d34-b011-888f1f1026ac" (UID: "360eee01-6461-4d34-b011-888f1f1026ac"). InnerVolumeSpecName "kube-api-access-9fkts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.272996 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "360eee01-6461-4d34-b011-888f1f1026ac" (UID: "360eee01-6461-4d34-b011-888f1f1026ac"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.295932 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "360eee01-6461-4d34-b011-888f1f1026ac" (UID: "360eee01-6461-4d34-b011-888f1f1026ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.297988 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.299193 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.334244 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-config-data" (OuterVolumeSpecName: "config-data") pod "360eee01-6461-4d34-b011-888f1f1026ac" (UID: "360eee01-6461-4d34-b011-888f1f1026ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.355281 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.355317 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.355334 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/360eee01-6461-4d34-b011-888f1f1026ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.355344 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fkts\" (UniqueName: \"kubernetes.io/projected/360eee01-6461-4d34-b011-888f1f1026ac-kube-api-access-9fkts\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.416959 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.417417 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" containerName="glance-log" containerID="cri-o://ef73ad5277a8ae25b601d0a226efffd25c2e8af63b5f422d620e4e0ab40faa96" gracePeriod=30 Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.418008 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" containerName="glance-httpd" 
containerID="cri-o://34026be8ede3ae84458859791cf2749046221b44264c909559b8e4bde022a7dd" gracePeriod=30 Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.458710 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-config-data\") pod \"2a804903-a2db-4a38-88e6-53e34f30b44c\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.458777 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a804903-a2db-4a38-88e6-53e34f30b44c-etc-machine-id\") pod \"2a804903-a2db-4a38-88e6-53e34f30b44c\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.458828 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-scripts\") pod \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.458889 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-config-data\") pod \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.458914 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-run-httpd\") pod \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.458943 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvc8x\" 
(UniqueName: \"kubernetes.io/projected/2a804903-a2db-4a38-88e6-53e34f30b44c-kube-api-access-bvc8x\") pod \"2a804903-a2db-4a38-88e6-53e34f30b44c\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.458994 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-scripts\") pod \"2a804903-a2db-4a38-88e6-53e34f30b44c\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.459017 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-config-data-custom\") pod \"2a804903-a2db-4a38-88e6-53e34f30b44c\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.459081 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-sg-core-conf-yaml\") pod \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.459571 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a804903-a2db-4a38-88e6-53e34f30b44c-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2a804903-a2db-4a38-88e6-53e34f30b44c" (UID: "2a804903-a2db-4a38-88e6-53e34f30b44c"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.459919 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4e9963db-bd42-4b27-8fae-8d2f99a3db1e" (UID: "4e9963db-bd42-4b27-8fae-8d2f99a3db1e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.460751 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nmd2\" (UniqueName: \"kubernetes.io/projected/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-kube-api-access-8nmd2\") pod \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.460867 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-combined-ca-bundle\") pod \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.461151 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-combined-ca-bundle\") pod \"2a804903-a2db-4a38-88e6-53e34f30b44c\" (UID: \"2a804903-a2db-4a38-88e6-53e34f30b44c\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.461227 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-log-httpd\") pod \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\" (UID: \"4e9963db-bd42-4b27-8fae-8d2f99a3db1e\") " Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.461979 4746 
reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.464703 4746 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2a804903-a2db-4a38-88e6-53e34f30b44c-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.465000 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4e9963db-bd42-4b27-8fae-8d2f99a3db1e" (UID: "4e9963db-bd42-4b27-8fae-8d2f99a3db1e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.468658 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-scripts" (OuterVolumeSpecName: "scripts") pod "2a804903-a2db-4a38-88e6-53e34f30b44c" (UID: "2a804903-a2db-4a38-88e6-53e34f30b44c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.468664 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-scripts" (OuterVolumeSpecName: "scripts") pod "4e9963db-bd42-4b27-8fae-8d2f99a3db1e" (UID: "4e9963db-bd42-4b27-8fae-8d2f99a3db1e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.473582 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a804903-a2db-4a38-88e6-53e34f30b44c-kube-api-access-bvc8x" (OuterVolumeSpecName: "kube-api-access-bvc8x") pod "2a804903-a2db-4a38-88e6-53e34f30b44c" (UID: "2a804903-a2db-4a38-88e6-53e34f30b44c"). InnerVolumeSpecName "kube-api-access-bvc8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.474224 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2a804903-a2db-4a38-88e6-53e34f30b44c" (UID: "2a804903-a2db-4a38-88e6-53e34f30b44c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.478497 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-kube-api-access-8nmd2" (OuterVolumeSpecName: "kube-api-access-8nmd2") pod "4e9963db-bd42-4b27-8fae-8d2f99a3db1e" (UID: "4e9963db-bd42-4b27-8fae-8d2f99a3db1e"). InnerVolumeSpecName "kube-api-access-8nmd2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.499350 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4e9963db-bd42-4b27-8fae-8d2f99a3db1e" (UID: "4e9963db-bd42-4b27-8fae-8d2f99a3db1e"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.499481 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7cc45cfb45-bbbq8"] Dec 11 10:13:25 crc kubenswrapper[4746]: W1211 10:13:25.511880 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c65e9de_7890_47aa_bcf7_48cdfd6dd262.slice/crio-60b4c088b4e9fde00ebbe71c30e52938287d80c8da8de81c539d59be4e5ac83d WatchSource:0}: Error finding container 60b4c088b4e9fde00ebbe71c30e52938287d80c8da8de81c539d59be4e5ac83d: Status 404 returned error can't find the container with id 60b4c088b4e9fde00ebbe71c30e52938287d80c8da8de81c539d59be4e5ac83d Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.549878 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a804903-a2db-4a38-88e6-53e34f30b44c" (UID: "2a804903-a2db-4a38-88e6-53e34f30b44c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.566285 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.566318 4746 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.566328 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.566339 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvc8x\" (UniqueName: \"kubernetes.io/projected/2a804903-a2db-4a38-88e6-53e34f30b44c-kube-api-access-bvc8x\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.566351 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.566359 4746 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.566369 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nmd2\" (UniqueName: \"kubernetes.io/projected/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-kube-api-access-8nmd2\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.566377 4746 
reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.585862 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e9963db-bd42-4b27-8fae-8d2f99a3db1e" (UID: "4e9963db-bd42-4b27-8fae-8d2f99a3db1e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.587197 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-config-data" (OuterVolumeSpecName: "config-data") pod "4e9963db-bd42-4b27-8fae-8d2f99a3db1e" (UID: "4e9963db-bd42-4b27-8fae-8d2f99a3db1e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.613421 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-config-data" (OuterVolumeSpecName: "config-data") pod "2a804903-a2db-4a38-88e6-53e34f30b44c" (UID: "2a804903-a2db-4a38-88e6-53e34f30b44c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.669456 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a804903-a2db-4a38-88e6-53e34f30b44c-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.669997 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.670063 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e9963db-bd42-4b27-8fae-8d2f99a3db1e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.907686 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7b5f77766d-fffkr" event={"ID":"360eee01-6461-4d34-b011-888f1f1026ac","Type":"ContainerDied","Data":"de7b10a14d924b704eb49f917cef53f0dc1145afbff7a63b61744691cecc0bc1"} Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.907751 4746 scope.go:117] "RemoveContainer" containerID="ce0a3a416f1d2ac59317e18b6d063b00f2d2adc7c6f1428893f28e77dac86eed" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.907921 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-7b5f77766d-fffkr" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.913286 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"f39575aa-fcfa-42ad-aceb-a8611602030f","Type":"ContainerStarted","Data":"b2d05fdde83fec0dc708cbe84b073f2010d665522ffe82004c28a17c7f3b7c8e"} Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.923946 4746 generic.go:334] "Generic (PLEG): container finished" podID="a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" containerID="ef73ad5277a8ae25b601d0a226efffd25c2e8af63b5f422d620e4e0ab40faa96" exitCode=143 Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.924031 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5","Type":"ContainerDied","Data":"ef73ad5277a8ae25b601d0a226efffd25c2e8af63b5f422d620e4e0ab40faa96"} Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.926965 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4e9963db-bd42-4b27-8fae-8d2f99a3db1e","Type":"ContainerDied","Data":"1a96300c66850e6ed4dc17b460034428162861261c6f2c5d7bd130f435dfd67f"} Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.927076 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.930102 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7cc45cfb45-bbbq8" event={"ID":"5c65e9de-7890-47aa-bcf7-48cdfd6dd262","Type":"ContainerStarted","Data":"60b4c088b4e9fde00ebbe71c30e52938287d80c8da8de81c539d59be4e5ac83d"} Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.930812 4746 scope.go:117] "RemoveContainer" containerID="b346cef92731e27d2fdf01205d4e416356b77666acb8fae6f8950a703ae5649a" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.941465 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"2a804903-a2db-4a38-88e6-53e34f30b44c","Type":"ContainerDied","Data":"0a4d1186c285a5aa7c0e55f0e823facc387e6f87159402db462457448a46c436"} Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.941564 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.944519 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.8920273930000002 podStartE2EDuration="17.94450024s" podCreationTimestamp="2025-12-11 10:13:08 +0000 UTC" firstStartedPulling="2025-12-11 10:13:09.717882456 +0000 UTC m=+1162.577745769" lastFinishedPulling="2025-12-11 10:13:24.770355303 +0000 UTC m=+1177.630218616" observedRunningTime="2025-12-11 10:13:25.93333656 +0000 UTC m=+1178.793199863" watchObservedRunningTime="2025-12-11 10:13:25.94450024 +0000 UTC m=+1178.804363553" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.963410 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7b5f77766d-fffkr"] Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.971014 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7b5f77766d-fffkr"] Dec 11 10:13:25 crc 
kubenswrapper[4746]: I1211 10:13:25.974848 4746 scope.go:117] "RemoveContainer" containerID="b56ba54be38f89b4417ed65bcf05be89600ff0ef38aacab93c6c003a87879ee1" Dec 11 10:13:25 crc kubenswrapper[4746]: I1211 10:13:25.988830 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.008873 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.019474 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.023715 4746 scope.go:117] "RemoveContainer" containerID="d3d27e5f0a1561e3045273a1188d91024dfbde5478a126d5406f7abe115bd56e" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.031121 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.038437 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:13:26 crc kubenswrapper[4746]: E1211 10:13:26.038804 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="360eee01-6461-4d34-b011-888f1f1026ac" containerName="barbican-api" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.038822 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="360eee01-6461-4d34-b011-888f1f1026ac" containerName="barbican-api" Dec 11 10:13:26 crc kubenswrapper[4746]: E1211 10:13:26.038837 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="360eee01-6461-4d34-b011-888f1f1026ac" containerName="barbican-api-log" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.038843 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="360eee01-6461-4d34-b011-888f1f1026ac" containerName="barbican-api-log" Dec 11 10:13:26 crc kubenswrapper[4746]: E1211 10:13:26.038856 4746 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="2a804903-a2db-4a38-88e6-53e34f30b44c" containerName="probe" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.038863 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a804903-a2db-4a38-88e6-53e34f30b44c" containerName="probe" Dec 11 10:13:26 crc kubenswrapper[4746]: E1211 10:13:26.038877 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerName="proxy-httpd" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.038883 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerName="proxy-httpd" Dec 11 10:13:26 crc kubenswrapper[4746]: E1211 10:13:26.038898 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerName="sg-core" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.038906 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerName="sg-core" Dec 11 10:13:26 crc kubenswrapper[4746]: E1211 10:13:26.038934 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerName="ceilometer-central-agent" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.038941 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerName="ceilometer-central-agent" Dec 11 10:13:26 crc kubenswrapper[4746]: E1211 10:13:26.038957 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a804903-a2db-4a38-88e6-53e34f30b44c" containerName="cinder-scheduler" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.038964 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a804903-a2db-4a38-88e6-53e34f30b44c" containerName="cinder-scheduler" Dec 11 10:13:26 crc kubenswrapper[4746]: E1211 10:13:26.038973 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerName="ceilometer-notification-agent" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.038979 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerName="ceilometer-notification-agent" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.039193 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerName="sg-core" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.039209 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerName="proxy-httpd" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.039219 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="360eee01-6461-4d34-b011-888f1f1026ac" containerName="barbican-api-log" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.039235 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="360eee01-6461-4d34-b011-888f1f1026ac" containerName="barbican-api" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.039245 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerName="ceilometer-notification-agent" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.039258 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a804903-a2db-4a38-88e6-53e34f30b44c" containerName="probe" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.039270 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" containerName="ceilometer-central-agent" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.039287 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a804903-a2db-4a38-88e6-53e34f30b44c" containerName="cinder-scheduler" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.041205 4746 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.045859 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.046158 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.049530 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.052806 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.055263 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.057513 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.064243 4746 scope.go:117] "RemoveContainer" containerID="45d1f378acb28195ced7fd82cf7ad8e025fe11e60d035a7b35cd8f4a3501783a" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.065350 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.102565 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw9p2\" (UniqueName: \"kubernetes.io/projected/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-kube-api-access-xw9p2\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.102639 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-config-data\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.102676 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.102767 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.102818 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-scripts\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.102851 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.104918 4746 scope.go:117] "RemoveContainer" containerID="6103cee71f225ddff31578e0fde397685cf2218d226db9588d3d48137f7bb27c" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.132470 4746 
scope.go:117] "RemoveContainer" containerID="1e3396cf06dc080c3c4faa3009ae507ab0c10fb7366060c713ea7934d50b0e4d" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.157602 4746 scope.go:117] "RemoveContainer" containerID="2e7314ecd62125dc99a2adb7adb28e158e23f99db76431b6dd8fc1e7d6f6b464" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.157614 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-77c4bd4944-jbg88" podUID="016a98db-33b2-4acb-a360-5e8a55aebd6c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.145:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.145:8443: connect: connection refused" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.157737 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-77c4bd4944-jbg88" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.204273 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-config-data\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.204395 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.204458 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-run-httpd\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0" Dec 11 10:13:26 crc 
kubenswrapper[4746]: I1211 10:13:26.204560 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv95v\" (UniqueName: \"kubernetes.io/projected/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-kube-api-access-kv95v\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.204680 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.204815 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-scripts\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.204852 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-scripts\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.204894 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.204983 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.204975 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.205024 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw9p2\" (UniqueName: \"kubernetes.io/projected/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-kube-api-access-xw9p2\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.205087 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-log-httpd\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.205137 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-config-data\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.205200 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.210923 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-scripts\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.211509 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.215552 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.216157 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-config-data\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.222873 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw9p2\" (UniqueName: \"kubernetes.io/projected/663cb4b5-0c8f-4518-9ba3-1d34e8b1949a-kube-api-access-xw9p2\") pod \"cinder-scheduler-0\" (UID: \"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a\") " pod="openstack/cinder-scheduler-0" Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.306751 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-scripts\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.307030 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.307065 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-log-httpd\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.307137 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-config-data\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.307166 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.307214 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-run-httpd\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.307239 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kv95v\" (UniqueName: \"kubernetes.io/projected/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-kube-api-access-kv95v\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.307937 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-log-httpd\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.308142 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-run-httpd\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.310683 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.311991 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-config-data\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.312005 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-scripts\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.312604 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.326462 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv95v\" (UniqueName: \"kubernetes.io/projected/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-kube-api-access-kv95v\") pod \"ceilometer-0\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") " pod="openstack/ceilometer-0"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.374533 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.399223 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.980660 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7cc45cfb45-bbbq8" event={"ID":"5c65e9de-7890-47aa-bcf7-48cdfd6dd262","Type":"ContainerStarted","Data":"14dbacf782756542105a3d0cc3b80749dac0677e7b00dbaa48bec776e71b5a51"}
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.981449 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7cc45cfb45-bbbq8" event={"ID":"5c65e9de-7890-47aa-bcf7-48cdfd6dd262","Type":"ContainerStarted","Data":"837d74cf2d61569f4a4032cd3fb04b53e7999e162ad4d36fe3b48d9b26972ead"}
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.981472 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7cc45cfb45-bbbq8"
Dec 11 10:13:26 crc kubenswrapper[4746]: I1211 10:13:26.983449 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 10:13:26 crc kubenswrapper[4746]: W1211 10:13:26.995612 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafe8e4b3_1a4f_45a2_8791_0ecced71da8d.slice/crio-ec1d0fa8904d331dfd5a9eee28a40b83694866f2feeb6a2f227239d243ff8e49 WatchSource:0}: Error finding container ec1d0fa8904d331dfd5a9eee28a40b83694866f2feeb6a2f227239d243ff8e49: Status 404 returned error can't find the container with id ec1d0fa8904d331dfd5a9eee28a40b83694866f2feeb6a2f227239d243ff8e49
Dec 11 10:13:27 crc kubenswrapper[4746]: I1211 10:13:27.023833 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7cc45cfb45-bbbq8" podStartSLOduration=11.023803091 podStartE2EDuration="11.023803091s" podCreationTimestamp="2025-12-11 10:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:27.00178595 +0000 UTC m=+1179.861649273" watchObservedRunningTime="2025-12-11 10:13:27.023803091 +0000 UTC m=+1179.883666404"
Dec 11 10:13:27 crc kubenswrapper[4746]: I1211 10:13:27.086444 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 11 10:13:27 crc kubenswrapper[4746]: W1211 10:13:27.087853 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod663cb4b5_0c8f_4518_9ba3_1d34e8b1949a.slice/crio-66ca8f512832ba9d23622944d08ecb4ade752e8bbeb29b53ad025334ed953eb1 WatchSource:0}: Error finding container 66ca8f512832ba9d23622944d08ecb4ade752e8bbeb29b53ad025334ed953eb1: Status 404 returned error can't find the container with id 66ca8f512832ba9d23622944d08ecb4ade752e8bbeb29b53ad025334ed953eb1
Dec 11 10:13:27 crc kubenswrapper[4746]: I1211 10:13:27.215967 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Dec 11 10:13:27 crc kubenswrapper[4746]: I1211 10:13:27.216544 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="53afe096-e47a-472d-a35a-a2da61b39aae" containerName="glance-log" containerID="cri-o://81eee28e1d469e3f5791642c67cea0198e21894b3d32e03dde81f523f9f19a69" gracePeriod=30
Dec 11 10:13:27 crc kubenswrapper[4746]: I1211 10:13:27.217088 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="53afe096-e47a-472d-a35a-a2da61b39aae" containerName="glance-httpd" containerID="cri-o://f741f1603479b8c7f5f2cd061ecfeffa1f6c727cd936902a115644c9f0250bf9" gracePeriod=30
Dec 11 10:13:27 crc kubenswrapper[4746]: I1211 10:13:27.657559 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a804903-a2db-4a38-88e6-53e34f30b44c" path="/var/lib/kubelet/pods/2a804903-a2db-4a38-88e6-53e34f30b44c/volumes"
Dec 11 10:13:27 crc kubenswrapper[4746]: I1211 10:13:27.659413 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="360eee01-6461-4d34-b011-888f1f1026ac" path="/var/lib/kubelet/pods/360eee01-6461-4d34-b011-888f1f1026ac/volumes"
Dec 11 10:13:27 crc kubenswrapper[4746]: I1211 10:13:27.660219 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e9963db-bd42-4b27-8fae-8d2f99a3db1e" path="/var/lib/kubelet/pods/4e9963db-bd42-4b27-8fae-8d2f99a3db1e/volumes"
Dec 11 10:13:28 crc kubenswrapper[4746]: I1211 10:13:28.000183 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a","Type":"ContainerStarted","Data":"66ca8f512832ba9d23622944d08ecb4ade752e8bbeb29b53ad025334ed953eb1"}
Dec 11 10:13:28 crc kubenswrapper[4746]: I1211 10:13:28.006313 4746 generic.go:334] "Generic (PLEG): container finished" podID="53afe096-e47a-472d-a35a-a2da61b39aae" containerID="81eee28e1d469e3f5791642c67cea0198e21894b3d32e03dde81f523f9f19a69" exitCode=143
Dec 11 10:13:28 crc kubenswrapper[4746]: I1211 10:13:28.006387 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53afe096-e47a-472d-a35a-a2da61b39aae","Type":"ContainerDied","Data":"81eee28e1d469e3f5791642c67cea0198e21894b3d32e03dde81f523f9f19a69"}
Dec 11 10:13:28 crc kubenswrapper[4746]: I1211 10:13:28.018397 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe8e4b3-1a4f-45a2-8791-0ecced71da8d","Type":"ContainerStarted","Data":"ec1d0fa8904d331dfd5a9eee28a40b83694866f2feeb6a2f227239d243ff8e49"}
Dec 11 10:13:28 crc kubenswrapper[4746]: I1211 10:13:28.018511 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7cc45cfb45-bbbq8"
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.007875 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.039939 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a","Type":"ContainerStarted","Data":"e61d5373a8a42fdca8a7efb605402661889660c57ab74790d5913993d7b75a62"}
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.039995 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"663cb4b5-0c8f-4518-9ba3-1d34e8b1949a","Type":"ContainerStarted","Data":"a8abbe0113b3d61c81b79afa91af6ad823feed5c6f78a6ccb0bfd455199da74b"}
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.046668 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe8e4b3-1a4f-45a2-8791-0ecced71da8d","Type":"ContainerStarted","Data":"eff0f96b36721c652fea903490a5f823b418d5fb482aa2edb66829d0d4dc8097"}
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.098961 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.09892333 podStartE2EDuration="4.09892333s" podCreationTimestamp="2025-12-11 10:13:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:29.058376561 +0000 UTC m=+1181.918239874" watchObservedRunningTime="2025-12-11 10:13:29.09892333 +0000 UTC m=+1181.958786643"
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.137789 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5","Type":"ContainerDied","Data":"34026be8ede3ae84458859791cf2749046221b44264c909559b8e4bde022a7dd"}
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.140372 4746 generic.go:334] "Generic (PLEG): container finished" podID="a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" containerID="34026be8ede3ae84458859791cf2749046221b44264c909559b8e4bde022a7dd" exitCode=0
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.330550 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.457445 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-combined-ca-bundle\") pod \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") "
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.457570 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-httpd-run\") pod \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") "
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.457614 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-scripts\") pod \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") "
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.457635 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-logs\") pod \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") "
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.457692 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") "
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.457735 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-config-data\") pod \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") "
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.458646 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-logs" (OuterVolumeSpecName: "logs") pod "a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" (UID: "a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.458664 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-777wf\" (UniqueName: \"kubernetes.io/projected/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-kube-api-access-777wf\") pod \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") "
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.458754 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-public-tls-certs\") pod \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\" (UID: \"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5\") "
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.458873 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" (UID: "a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.460016 4746 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-httpd-run\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.460041 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-logs\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.466920 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-kube-api-access-777wf" (OuterVolumeSpecName: "kube-api-access-777wf") pod "a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" (UID: "a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5"). InnerVolumeSpecName "kube-api-access-777wf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.468059 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" (UID: "a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.481529 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-scripts" (OuterVolumeSpecName: "scripts") pod "a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" (UID: "a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.514336 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" (UID: "a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.538161 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-config-data" (OuterVolumeSpecName: "config-data") pod "a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" (UID: "a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.539791 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" (UID: "a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.562594 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.562669 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.562687 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-config-data\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.562697 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-777wf\" (UniqueName: \"kubernetes.io/projected/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-kube-api-access-777wf\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.562710 4746 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-public-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.562719 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.586728 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Dec 11 10:13:29 crc kubenswrapper[4746]: I1211 10:13:29.667280 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.182101 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe8e4b3-1a4f-45a2-8791-0ecced71da8d","Type":"ContainerStarted","Data":"95ffe5d8e55b66377af5cf13d50e827164b0fcb73ecf6817d1016745f707a11b"}
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.191856 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.191851 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5","Type":"ContainerDied","Data":"ad52e4da709f649f4f7479bf2a93d47c52b0c34e8fd6940596e343ed6c08ee86"}
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.191931 4746 scope.go:117] "RemoveContainer" containerID="34026be8ede3ae84458859791cf2749046221b44264c909559b8e4bde022a7dd"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.225875 4746 scope.go:117] "RemoveContainer" containerID="ef73ad5277a8ae25b601d0a226efffd25c2e8af63b5f422d620e4e0ab40faa96"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.225948 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.240187 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.261402 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 11 10:13:30 crc kubenswrapper[4746]: E1211 10:13:30.261841 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" containerName="glance-log"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.261861 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" containerName="glance-log"
Dec 11 10:13:30 crc kubenswrapper[4746]: E1211 10:13:30.261894 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" containerName="glance-httpd"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.261900 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" containerName="glance-httpd"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.262167 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" containerName="glance-httpd"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.262208 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" containerName="glance-log"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.263344 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.269177 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.269579 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.275743 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.384672 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db60fce8-8218-4af0-84db-6bbbe7218d4f-scripts\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.384726 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db60fce8-8218-4af0-84db-6bbbe7218d4f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.384749 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.384797 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db60fce8-8218-4af0-84db-6bbbe7218d4f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.384840 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db60fce8-8218-4af0-84db-6bbbe7218d4f-logs\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.385772 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db60fce8-8218-4af0-84db-6bbbe7218d4f-config-data\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.385854 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db60fce8-8218-4af0-84db-6bbbe7218d4f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.385923 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pqwd\" (UniqueName: \"kubernetes.io/projected/db60fce8-8218-4af0-84db-6bbbe7218d4f-kube-api-access-2pqwd\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.487238 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db60fce8-8218-4af0-84db-6bbbe7218d4f-scripts\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.487281 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db60fce8-8218-4af0-84db-6bbbe7218d4f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.487304 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.487339 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db60fce8-8218-4af0-84db-6bbbe7218d4f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.487365 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db60fce8-8218-4af0-84db-6bbbe7218d4f-logs\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.487411 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db60fce8-8218-4af0-84db-6bbbe7218d4f-config-data\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.487442 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pqwd\" (UniqueName: \"kubernetes.io/projected/db60fce8-8218-4af0-84db-6bbbe7218d4f-kube-api-access-2pqwd\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.487459 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db60fce8-8218-4af0-84db-6bbbe7218d4f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.487922 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/db60fce8-8218-4af0-84db-6bbbe7218d4f-logs\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.488350 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.488354 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/db60fce8-8218-4af0-84db-6bbbe7218d4f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.496063 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/db60fce8-8218-4af0-84db-6bbbe7218d4f-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.497284 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db60fce8-8218-4af0-84db-6bbbe7218d4f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.497594 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db60fce8-8218-4af0-84db-6bbbe7218d4f-scripts\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.500591 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db60fce8-8218-4af0-84db-6bbbe7218d4f-config-data\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.510880 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pqwd\" (UniqueName: \"kubernetes.io/projected/db60fce8-8218-4af0-84db-6bbbe7218d4f-kube-api-access-2pqwd\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.525387 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-0\" (UID: \"db60fce8-8218-4af0-84db-6bbbe7218d4f\") " pod="openstack/glance-default-external-api-0"
Dec 11 10:13:30 crc kubenswrapper[4746]: I1211 10:13:30.585438 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.106119 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.350455 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53afe096-e47a-472d-a35a-a2da61b39aae-logs\") pod \"53afe096-e47a-472d-a35a-a2da61b39aae\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") "
Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.350538 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-internal-tls-certs\") pod \"53afe096-e47a-472d-a35a-a2da61b39aae\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") "
Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.350570 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-combined-ca-bundle\") pod \"53afe096-e47a-472d-a35a-a2da61b39aae\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") "
Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.350642 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53afe096-e47a-472d-a35a-a2da61b39aae-httpd-run\") pod \"53afe096-e47a-472d-a35a-a2da61b39aae\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") "
Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.350752 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"53afe096-e47a-472d-a35a-a2da61b39aae\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") "
Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.350793 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh5vh\" (UniqueName: \"kubernetes.io/projected/53afe096-e47a-472d-a35a-a2da61b39aae-kube-api-access-xh5vh\") pod \"53afe096-e47a-472d-a35a-a2da61b39aae\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") "
Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.350813 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-scripts\") pod \"53afe096-e47a-472d-a35a-a2da61b39aae\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") "
Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.350888 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-config-data\") pod \"53afe096-e47a-472d-a35a-a2da61b39aae\" (UID: \"53afe096-e47a-472d-a35a-a2da61b39aae\") "
Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.352293 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53afe096-e47a-472d-a35a-a2da61b39aae-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "53afe096-e47a-472d-a35a-a2da61b39aae" (UID: "53afe096-e47a-472d-a35a-a2da61b39aae"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.356228 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53afe096-e47a-472d-a35a-a2da61b39aae-logs" (OuterVolumeSpecName: "logs") pod "53afe096-e47a-472d-a35a-a2da61b39aae" (UID: "53afe096-e47a-472d-a35a-a2da61b39aae"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.371351 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53afe096-e47a-472d-a35a-a2da61b39aae-kube-api-access-xh5vh" (OuterVolumeSpecName: "kube-api-access-xh5vh") pod "53afe096-e47a-472d-a35a-a2da61b39aae" (UID: "53afe096-e47a-472d-a35a-a2da61b39aae"). InnerVolumeSpecName "kube-api-access-xh5vh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.390622 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-scripts" (OuterVolumeSpecName: "scripts") pod "53afe096-e47a-472d-a35a-a2da61b39aae" (UID: "53afe096-e47a-472d-a35a-a2da61b39aae"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.400967 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.425252 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "53afe096-e47a-472d-a35a-a2da61b39aae" (UID: "53afe096-e47a-472d-a35a-a2da61b39aae"). InnerVolumeSpecName "local-storage05-crc".
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.457096 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.457137 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.457151 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh5vh\" (UniqueName: \"kubernetes.io/projected/53afe096-e47a-472d-a35a-a2da61b39aae-kube-api-access-xh5vh\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.457167 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53afe096-e47a-472d-a35a-a2da61b39aae-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.457179 4746 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53afe096-e47a-472d-a35a-a2da61b39aae-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.488527 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.489343 4746 generic.go:334] "Generic (PLEG): container finished" podID="53afe096-e47a-472d-a35a-a2da61b39aae" containerID="f741f1603479b8c7f5f2cd061ecfeffa1f6c727cd936902a115644c9f0250bf9" exitCode=0 Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.489421 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"53afe096-e47a-472d-a35a-a2da61b39aae","Type":"ContainerDied","Data":"f741f1603479b8c7f5f2cd061ecfeffa1f6c727cd936902a115644c9f0250bf9"} Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.489451 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"53afe096-e47a-472d-a35a-a2da61b39aae","Type":"ContainerDied","Data":"afbd8d88d7a49052d4cf0000b62afaa9fe198e3a7dfef8c307592d6ee2037c2d"} Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.489480 4746 scope.go:117] "RemoveContainer" containerID="f741f1603479b8c7f5f2cd061ecfeffa1f6c727cd936902a115644c9f0250bf9" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.489573 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.509890 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7cc45cfb45-bbbq8" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.526935 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53afe096-e47a-472d-a35a-a2da61b39aae" (UID: "53afe096-e47a-472d-a35a-a2da61b39aae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.535000 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.535531 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "53afe096-e47a-472d-a35a-a2da61b39aae" (UID: "53afe096-e47a-472d-a35a-a2da61b39aae"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.560215 4746 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.560253 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.560264 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.575321 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe8e4b3-1a4f-45a2-8791-0ecced71da8d","Type":"ContainerStarted","Data":"ea65f503e15c7dc59b15ecc074b823cf2a67bfbb1706d75a5979bb92e25a8e64"} Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.607242 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-config-data" (OuterVolumeSpecName: "config-data") pod "53afe096-e47a-472d-a35a-a2da61b39aae" (UID: "53afe096-e47a-472d-a35a-a2da61b39aae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.610271 4746 scope.go:117] "RemoveContainer" containerID="81eee28e1d469e3f5791642c67cea0198e21894b3d32e03dde81f523f9f19a69" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.704960 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5" path="/var/lib/kubelet/pods/a2c7b1d0-c00f-4c2f-b5cd-ca67ce430fe5/volumes" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.767445 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.767476 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53afe096-e47a-472d-a35a-a2da61b39aae-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.779269 4746 scope.go:117] "RemoveContainer" containerID="f741f1603479b8c7f5f2cd061ecfeffa1f6c727cd936902a115644c9f0250bf9" Dec 11 10:13:31 crc kubenswrapper[4746]: E1211 10:13:31.780818 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f741f1603479b8c7f5f2cd061ecfeffa1f6c727cd936902a115644c9f0250bf9\": container with ID starting with f741f1603479b8c7f5f2cd061ecfeffa1f6c727cd936902a115644c9f0250bf9 not found: ID does not exist" containerID="f741f1603479b8c7f5f2cd061ecfeffa1f6c727cd936902a115644c9f0250bf9" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.780902 4746 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f741f1603479b8c7f5f2cd061ecfeffa1f6c727cd936902a115644c9f0250bf9"} err="failed to get container status \"f741f1603479b8c7f5f2cd061ecfeffa1f6c727cd936902a115644c9f0250bf9\": rpc error: code = NotFound desc = could not find container \"f741f1603479b8c7f5f2cd061ecfeffa1f6c727cd936902a115644c9f0250bf9\": container with ID starting with f741f1603479b8c7f5f2cd061ecfeffa1f6c727cd936902a115644c9f0250bf9 not found: ID does not exist" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.780970 4746 scope.go:117] "RemoveContainer" containerID="81eee28e1d469e3f5791642c67cea0198e21894b3d32e03dde81f523f9f19a69" Dec 11 10:13:31 crc kubenswrapper[4746]: E1211 10:13:31.791057 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81eee28e1d469e3f5791642c67cea0198e21894b3d32e03dde81f523f9f19a69\": container with ID starting with 81eee28e1d469e3f5791642c67cea0198e21894b3d32e03dde81f523f9f19a69 not found: ID does not exist" containerID="81eee28e1d469e3f5791642c67cea0198e21894b3d32e03dde81f523f9f19a69" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.791128 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81eee28e1d469e3f5791642c67cea0198e21894b3d32e03dde81f523f9f19a69"} err="failed to get container status \"81eee28e1d469e3f5791642c67cea0198e21894b3d32e03dde81f523f9f19a69\": rpc error: code = NotFound desc = could not find container \"81eee28e1d469e3f5791642c67cea0198e21894b3d32e03dde81f523f9f19a69\": container with ID starting with 81eee28e1d469e3f5791642c67cea0198e21894b3d32e03dde81f523f9f19a69 not found: ID does not exist" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.848555 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.885384 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.907264 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:13:31 crc kubenswrapper[4746]: E1211 10:13:31.908021 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53afe096-e47a-472d-a35a-a2da61b39aae" containerName="glance-httpd" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.908070 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="53afe096-e47a-472d-a35a-a2da61b39aae" containerName="glance-httpd" Dec 11 10:13:31 crc kubenswrapper[4746]: E1211 10:13:31.908123 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53afe096-e47a-472d-a35a-a2da61b39aae" containerName="glance-log" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.908133 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="53afe096-e47a-472d-a35a-a2da61b39aae" containerName="glance-log" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.908398 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="53afe096-e47a-472d-a35a-a2da61b39aae" containerName="glance-log" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.908421 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="53afe096-e47a-472d-a35a-a2da61b39aae" containerName="glance-httpd" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.909883 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.918503 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.918672 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 11 10:13:31 crc kubenswrapper[4746]: I1211 10:13:31.918719 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.076584 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/805456fb-d8e0-4341-b5ad-93906c3ad0e5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.076873 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/805456fb-d8e0-4341-b5ad-93906c3ad0e5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.076917 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.077121 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/805456fb-d8e0-4341-b5ad-93906c3ad0e5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.077327 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpz58\" (UniqueName: \"kubernetes.io/projected/805456fb-d8e0-4341-b5ad-93906c3ad0e5-kube-api-access-wpz58\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.077509 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/805456fb-d8e0-4341-b5ad-93906c3ad0e5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.077566 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/805456fb-d8e0-4341-b5ad-93906c3ad0e5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.077681 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/805456fb-d8e0-4341-b5ad-93906c3ad0e5-logs\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.180182 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/805456fb-d8e0-4341-b5ad-93906c3ad0e5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.180329 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/805456fb-d8e0-4341-b5ad-93906c3ad0e5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.180374 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.180451 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/805456fb-d8e0-4341-b5ad-93906c3ad0e5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.180541 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpz58\" (UniqueName: \"kubernetes.io/projected/805456fb-d8e0-4341-b5ad-93906c3ad0e5-kube-api-access-wpz58\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.180627 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/805456fb-d8e0-4341-b5ad-93906c3ad0e5-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.180664 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/805456fb-d8e0-4341-b5ad-93906c3ad0e5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.180729 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/805456fb-d8e0-4341-b5ad-93906c3ad0e5-logs\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.181466 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/805456fb-d8e0-4341-b5ad-93906c3ad0e5-logs\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.184272 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.188192 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/805456fb-d8e0-4341-b5ad-93906c3ad0e5-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.190057 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/805456fb-d8e0-4341-b5ad-93906c3ad0e5-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.192913 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/805456fb-d8e0-4341-b5ad-93906c3ad0e5-config-data\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.195580 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/805456fb-d8e0-4341-b5ad-93906c3ad0e5-scripts\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.195808 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/805456fb-d8e0-4341-b5ad-93906c3ad0e5-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.210963 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpz58\" (UniqueName: \"kubernetes.io/projected/805456fb-d8e0-4341-b5ad-93906c3ad0e5-kube-api-access-wpz58\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " 
pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.232067 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"805456fb-d8e0-4341-b5ad-93906c3ad0e5\") " pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.246849 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.624865 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"db60fce8-8218-4af0-84db-6bbbe7218d4f","Type":"ContainerStarted","Data":"a4c84b4463cb1752b7636c10725efb5230dbdd12eed9f39784cf6f898144796f"} Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.625163 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"db60fce8-8218-4af0-84db-6bbbe7218d4f","Type":"ContainerStarted","Data":"de28a88b480f337d607181883b42f001abda9228d42b66d8e2f818d76db7bdd3"} Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.633410 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerName="ceilometer-central-agent" containerID="cri-o://eff0f96b36721c652fea903490a5f823b418d5fb482aa2edb66829d0d4dc8097" gracePeriod=30 Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.633734 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe8e4b3-1a4f-45a2-8791-0ecced71da8d","Type":"ContainerStarted","Data":"d67e949151c2ad9dc673179efeb8fa184e625f26a555e915284605193940953e"} Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.633782 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ceilometer-0" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.634123 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerName="proxy-httpd" containerID="cri-o://d67e949151c2ad9dc673179efeb8fa184e625f26a555e915284605193940953e" gracePeriod=30 Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.634182 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerName="sg-core" containerID="cri-o://ea65f503e15c7dc59b15ecc074b823cf2a67bfbb1706d75a5979bb92e25a8e64" gracePeriod=30 Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.634233 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerName="ceilometer-notification-agent" containerID="cri-o://95ffe5d8e55b66377af5cf13d50e827164b0fcb73ecf6817d1016745f707a11b" gracePeriod=30 Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.663741 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.624752327 podStartE2EDuration="7.663717519s" podCreationTimestamp="2025-12-11 10:13:25 +0000 UTC" firstStartedPulling="2025-12-11 10:13:27.006395914 +0000 UTC m=+1179.866259237" lastFinishedPulling="2025-12-11 10:13:32.045361106 +0000 UTC m=+1184.905224429" observedRunningTime="2025-12-11 10:13:32.658490159 +0000 UTC m=+1185.518353472" watchObservedRunningTime="2025-12-11 10:13:32.663717519 +0000 UTC m=+1185.523580832" Dec 11 10:13:32 crc kubenswrapper[4746]: I1211 10:13:32.915910 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 11 10:13:32 crc kubenswrapper[4746]: W1211 10:13:32.935681 4746 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod805456fb_d8e0_4341_b5ad_93906c3ad0e5.slice/crio-e9def140deafa748cb0342bf17e2331280099a3fd0ae9f9b15e7fd06e13500ca WatchSource:0}: Error finding container e9def140deafa748cb0342bf17e2331280099a3fd0ae9f9b15e7fd06e13500ca: Status 404 returned error can't find the container with id e9def140deafa748cb0342bf17e2331280099a3fd0ae9f9b15e7fd06e13500ca
Dec 11 10:13:33 crc kubenswrapper[4746]: I1211 10:13:33.651095 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53afe096-e47a-472d-a35a-a2da61b39aae" path="/var/lib/kubelet/pods/53afe096-e47a-472d-a35a-a2da61b39aae/volumes"
Dec 11 10:13:33 crc kubenswrapper[4746]: I1211 10:13:33.662626 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"db60fce8-8218-4af0-84db-6bbbe7218d4f","Type":"ContainerStarted","Data":"e1ea15bce5911b9109b39c76ba62e3d872caf72b115ffb32943bf77d7520e041"}
Dec 11 10:13:33 crc kubenswrapper[4746]: I1211 10:13:33.666349 4746 generic.go:334] "Generic (PLEG): container finished" podID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerID="d67e949151c2ad9dc673179efeb8fa184e625f26a555e915284605193940953e" exitCode=0
Dec 11 10:13:33 crc kubenswrapper[4746]: I1211 10:13:33.666379 4746 generic.go:334] "Generic (PLEG): container finished" podID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerID="ea65f503e15c7dc59b15ecc074b823cf2a67bfbb1706d75a5979bb92e25a8e64" exitCode=2
Dec 11 10:13:33 crc kubenswrapper[4746]: I1211 10:13:33.666389 4746 generic.go:334] "Generic (PLEG): container finished" podID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerID="95ffe5d8e55b66377af5cf13d50e827164b0fcb73ecf6817d1016745f707a11b" exitCode=0
Dec 11 10:13:33 crc kubenswrapper[4746]: I1211 10:13:33.666437 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe8e4b3-1a4f-45a2-8791-0ecced71da8d","Type":"ContainerDied","Data":"d67e949151c2ad9dc673179efeb8fa184e625f26a555e915284605193940953e"}
Dec 11 10:13:33 crc kubenswrapper[4746]: I1211 10:13:33.666458 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe8e4b3-1a4f-45a2-8791-0ecced71da8d","Type":"ContainerDied","Data":"ea65f503e15c7dc59b15ecc074b823cf2a67bfbb1706d75a5979bb92e25a8e64"}
Dec 11 10:13:33 crc kubenswrapper[4746]: I1211 10:13:33.666472 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe8e4b3-1a4f-45a2-8791-0ecced71da8d","Type":"ContainerDied","Data":"95ffe5d8e55b66377af5cf13d50e827164b0fcb73ecf6817d1016745f707a11b"}
Dec 11 10:13:33 crc kubenswrapper[4746]: I1211 10:13:33.671821 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"805456fb-d8e0-4341-b5ad-93906c3ad0e5","Type":"ContainerStarted","Data":"efb9ad0ed41eb49f2acdfd0f470c20d662b92a2658253570e08a8d6857c9456d"}
Dec 11 10:13:33 crc kubenswrapper[4746]: I1211 10:13:33.671858 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"805456fb-d8e0-4341-b5ad-93906c3ad0e5","Type":"ContainerStarted","Data":"e9def140deafa748cb0342bf17e2331280099a3fd0ae9f9b15e7fd06e13500ca"}
Dec 11 10:13:33 crc kubenswrapper[4746]: I1211 10:13:33.688419 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.688392423 podStartE2EDuration="3.688392423s" podCreationTimestamp="2025-12-11 10:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:33.685098495 +0000 UTC m=+1186.544961808" watchObservedRunningTime="2025-12-11 10:13:33.688392423 +0000 UTC m=+1186.548255736"
Dec 11 10:13:34 crc kubenswrapper[4746]: I1211 10:13:34.705718 4746 generic.go:334] "Generic (PLEG): container finished" podID="016a98db-33b2-4acb-a360-5e8a55aebd6c" containerID="98ac9bd0e996c24ef47fc1358f225bf0638d3c2eb7ec7974d3c9abeb6a0901fa" exitCode=137
Dec 11 10:13:34 crc kubenswrapper[4746]: I1211 10:13:34.705875 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77c4bd4944-jbg88" event={"ID":"016a98db-33b2-4acb-a360-5e8a55aebd6c","Type":"ContainerDied","Data":"98ac9bd0e996c24ef47fc1358f225bf0638d3c2eb7ec7974d3c9abeb6a0901fa"}
Dec 11 10:13:34 crc kubenswrapper[4746]: I1211 10:13:34.753174 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.753152763 podStartE2EDuration="3.753152763s" podCreationTimestamp="2025-12-11 10:13:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:34.740611406 +0000 UTC m=+1187.600474729" watchObservedRunningTime="2025-12-11 10:13:34.753152763 +0000 UTC m=+1187.613016076"
Dec 11 10:13:34 crc kubenswrapper[4746]: I1211 10:13:34.845103 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77c4bd4944-jbg88"
Dec 11 10:13:34 crc kubenswrapper[4746]: I1211 10:13:34.967485 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/016a98db-33b2-4acb-a360-5e8a55aebd6c-logs\") pod \"016a98db-33b2-4acb-a360-5e8a55aebd6c\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") "
Dec 11 10:13:34 crc kubenswrapper[4746]: I1211 10:13:34.967543 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/016a98db-33b2-4acb-a360-5e8a55aebd6c-scripts\") pod \"016a98db-33b2-4acb-a360-5e8a55aebd6c\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") "
Dec 11 10:13:34 crc kubenswrapper[4746]: I1211 10:13:34.967568 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-horizon-tls-certs\") pod \"016a98db-33b2-4acb-a360-5e8a55aebd6c\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") "
Dec 11 10:13:34 crc kubenswrapper[4746]: I1211 10:13:34.967670 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-combined-ca-bundle\") pod \"016a98db-33b2-4acb-a360-5e8a55aebd6c\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") "
Dec 11 10:13:34 crc kubenswrapper[4746]: I1211 10:13:34.967754 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhhxk\" (UniqueName: \"kubernetes.io/projected/016a98db-33b2-4acb-a360-5e8a55aebd6c-kube-api-access-xhhxk\") pod \"016a98db-33b2-4acb-a360-5e8a55aebd6c\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") "
Dec 11 10:13:34 crc kubenswrapper[4746]: I1211 10:13:34.968077 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/016a98db-33b2-4acb-a360-5e8a55aebd6c-config-data\") pod \"016a98db-33b2-4acb-a360-5e8a55aebd6c\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") "
Dec 11 10:13:34 crc kubenswrapper[4746]: I1211 10:13:34.968114 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-horizon-secret-key\") pod \"016a98db-33b2-4acb-a360-5e8a55aebd6c\" (UID: \"016a98db-33b2-4acb-a360-5e8a55aebd6c\") "
Dec 11 10:13:34 crc kubenswrapper[4746]: I1211 10:13:34.969731 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/016a98db-33b2-4acb-a360-5e8a55aebd6c-logs" (OuterVolumeSpecName: "logs") pod "016a98db-33b2-4acb-a360-5e8a55aebd6c" (UID: "016a98db-33b2-4acb-a360-5e8a55aebd6c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:13:34 crc kubenswrapper[4746]: I1211 10:13:34.969958 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/016a98db-33b2-4acb-a360-5e8a55aebd6c-logs\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:34 crc kubenswrapper[4746]: I1211 10:13:34.976358 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/016a98db-33b2-4acb-a360-5e8a55aebd6c-kube-api-access-xhhxk" (OuterVolumeSpecName: "kube-api-access-xhhxk") pod "016a98db-33b2-4acb-a360-5e8a55aebd6c" (UID: "016a98db-33b2-4acb-a360-5e8a55aebd6c"). InnerVolumeSpecName "kube-api-access-xhhxk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:13:34 crc kubenswrapper[4746]: I1211 10:13:34.976522 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "016a98db-33b2-4acb-a360-5e8a55aebd6c" (UID: "016a98db-33b2-4acb-a360-5e8a55aebd6c"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.010171 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "016a98db-33b2-4acb-a360-5e8a55aebd6c" (UID: "016a98db-33b2-4acb-a360-5e8a55aebd6c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.018032 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/016a98db-33b2-4acb-a360-5e8a55aebd6c-scripts" (OuterVolumeSpecName: "scripts") pod "016a98db-33b2-4acb-a360-5e8a55aebd6c" (UID: "016a98db-33b2-4acb-a360-5e8a55aebd6c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.039628 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/016a98db-33b2-4acb-a360-5e8a55aebd6c-config-data" (OuterVolumeSpecName: "config-data") pod "016a98db-33b2-4acb-a360-5e8a55aebd6c" (UID: "016a98db-33b2-4acb-a360-5e8a55aebd6c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.060171 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "016a98db-33b2-4acb-a360-5e8a55aebd6c" (UID: "016a98db-33b2-4acb-a360-5e8a55aebd6c"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.072015 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/016a98db-33b2-4acb-a360-5e8a55aebd6c-config-data\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.072069 4746 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.072086 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/016a98db-33b2-4acb-a360-5e8a55aebd6c-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.072096 4746 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-horizon-tls-certs\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.072106 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/016a98db-33b2-4acb-a360-5e8a55aebd6c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.072115 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhhxk\" (UniqueName: \"kubernetes.io/projected/016a98db-33b2-4acb-a360-5e8a55aebd6c-kube-api-access-xhhxk\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.731341 4746 generic.go:334] "Generic (PLEG): container finished" podID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerID="eff0f96b36721c652fea903490a5f823b418d5fb482aa2edb66829d0d4dc8097" exitCode=0
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.731456 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe8e4b3-1a4f-45a2-8791-0ecced71da8d","Type":"ContainerDied","Data":"eff0f96b36721c652fea903490a5f823b418d5fb482aa2edb66829d0d4dc8097"}
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.735846 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-77c4bd4944-jbg88" event={"ID":"016a98db-33b2-4acb-a360-5e8a55aebd6c","Type":"ContainerDied","Data":"2f8739faf813bee05e2a0db5437afeff3300e2b8a6f2aa2eb36c5b117d2ccf4d"}
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.735896 4746 scope.go:117] "RemoveContainer" containerID="453c2a84853f09eb6406ce82bcc4f8dd06a8ac4f033e57a7416674cbed96eb48"
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.737737 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-77c4bd4944-jbg88"
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.739532 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"805456fb-d8e0-4341-b5ad-93906c3ad0e5","Type":"ContainerStarted","Data":"cefbb4b21f30a813a76a938176135898f62d5978630b4495a4659c133dca4d27"}
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.745977 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.764303 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-77c4bd4944-jbg88"]
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.773755 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-77c4bd4944-jbg88"]
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.887480 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-config-data\") pod \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") "
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.887566 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-run-httpd\") pod \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") "
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.887649 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-combined-ca-bundle\") pod \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") "
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.887730 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv95v\" (UniqueName: \"kubernetes.io/projected/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-kube-api-access-kv95v\") pod \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") "
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.887815 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-log-httpd\") pod \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") "
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.887880 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-sg-core-conf-yaml\") pod \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") "
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.887932 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-scripts\") pod \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\" (UID: \"afe8e4b3-1a4f-45a2-8791-0ecced71da8d\") "
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.888304 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "afe8e4b3-1a4f-45a2-8791-0ecced71da8d" (UID: "afe8e4b3-1a4f-45a2-8791-0ecced71da8d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.890777 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "afe8e4b3-1a4f-45a2-8791-0ecced71da8d" (UID: "afe8e4b3-1a4f-45a2-8791-0ecced71da8d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.894556 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-scripts" (OuterVolumeSpecName: "scripts") pod "afe8e4b3-1a4f-45a2-8791-0ecced71da8d" (UID: "afe8e4b3-1a4f-45a2-8791-0ecced71da8d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.895546 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-kube-api-access-kv95v" (OuterVolumeSpecName: "kube-api-access-kv95v") pod "afe8e4b3-1a4f-45a2-8791-0ecced71da8d" (UID: "afe8e4b3-1a4f-45a2-8791-0ecced71da8d"). InnerVolumeSpecName "kube-api-access-kv95v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.918398 4746 scope.go:117] "RemoveContainer" containerID="98ac9bd0e996c24ef47fc1358f225bf0638d3c2eb7ec7974d3c9abeb6a0901fa"
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.933836 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "afe8e4b3-1a4f-45a2-8791-0ecced71da8d" (UID: "afe8e4b3-1a4f-45a2-8791-0ecced71da8d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.975725 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "afe8e4b3-1a4f-45a2-8791-0ecced71da8d" (UID: "afe8e4b3-1a4f-45a2-8791-0ecced71da8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.990527 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kv95v\" (UniqueName: \"kubernetes.io/projected/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-kube-api-access-kv95v\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.990558 4746 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-log-httpd\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.990567 4746 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.990575 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-scripts\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.990585 4746 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-run-httpd\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.990593 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:35 crc kubenswrapper[4746]: I1211 10:13:35.997000 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-config-data" (OuterVolumeSpecName: "config-data") pod "afe8e4b3-1a4f-45a2-8791-0ecced71da8d" (UID: "afe8e4b3-1a4f-45a2-8791-0ecced71da8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.123258 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/afe8e4b3-1a4f-45a2-8791-0ecced71da8d-config-data\") on node \"crc\" DevicePath \"\""
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.650012 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.753077 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"afe8e4b3-1a4f-45a2-8791-0ecced71da8d","Type":"ContainerDied","Data":"ec1d0fa8904d331dfd5a9eee28a40b83694866f2feeb6a2f227239d243ff8e49"}
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.753111 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.753216 4746 scope.go:117] "RemoveContainer" containerID="d67e949151c2ad9dc673179efeb8fa184e625f26a555e915284605193940953e"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.774522 4746 scope.go:117] "RemoveContainer" containerID="ea65f503e15c7dc59b15ecc074b823cf2a67bfbb1706d75a5979bb92e25a8e64"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.801397 4746 scope.go:117] "RemoveContainer" containerID="95ffe5d8e55b66377af5cf13d50e827164b0fcb73ecf6817d1016745f707a11b"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.802341 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.813462 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.853415 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 11 10:13:36 crc kubenswrapper[4746]: E1211 10:13:36.854244 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerName="ceilometer-notification-agent"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.854276 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerName="ceilometer-notification-agent"
Dec 11 10:13:36 crc kubenswrapper[4746]: E1211 10:13:36.854323 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerName="ceilometer-central-agent"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.854334 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerName="ceilometer-central-agent"
Dec 11 10:13:36 crc kubenswrapper[4746]: E1211 10:13:36.854366 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerName="sg-core"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.854374 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerName="sg-core"
Dec 11 10:13:36 crc kubenswrapper[4746]: E1211 10:13:36.854385 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerName="proxy-httpd"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.854391 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerName="proxy-httpd"
Dec 11 10:13:36 crc kubenswrapper[4746]: E1211 10:13:36.854401 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016a98db-33b2-4acb-a360-5e8a55aebd6c" containerName="horizon-log"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.854407 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="016a98db-33b2-4acb-a360-5e8a55aebd6c" containerName="horizon-log"
Dec 11 10:13:36 crc kubenswrapper[4746]: E1211 10:13:36.854430 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016a98db-33b2-4acb-a360-5e8a55aebd6c" containerName="horizon"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.854437 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="016a98db-33b2-4acb-a360-5e8a55aebd6c" containerName="horizon"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.854658 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerName="ceilometer-notification-agent"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.854684 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerName="ceilometer-central-agent"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.854695 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerName="proxy-httpd"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.854705 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="016a98db-33b2-4acb-a360-5e8a55aebd6c" containerName="horizon-log"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.854719 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="016a98db-33b2-4acb-a360-5e8a55aebd6c" containerName="horizon"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.854737 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" containerName="sg-core"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.856946 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.864679 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.864930 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.871930 4746 scope.go:117] "RemoveContainer" containerID="eff0f96b36721c652fea903490a5f823b418d5fb482aa2edb66829d0d4dc8097"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.884845 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.912485 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-vxrd2"]
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.914547 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vxrd2"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.953876 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-scripts\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.953918 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.954439 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.954507 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hnlh\" (UniqueName: \"kubernetes.io/projected/041f7301-c875-4ebf-a917-6462c50316ce-kube-api-access-4hnlh\") pod \"nova-api-db-create-vxrd2\" (UID: \"041f7301-c875-4ebf-a917-6462c50316ce\") " pod="openstack/nova-api-db-create-vxrd2"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.954609 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-run-httpd\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.954639 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfsm8\" (UniqueName: \"kubernetes.io/projected/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-kube-api-access-hfsm8\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.954699 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-config-data\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.954754 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-log-httpd\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.954857 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041f7301-c875-4ebf-a917-6462c50316ce-operator-scripts\") pod \"nova-api-db-create-vxrd2\" (UID: \"041f7301-c875-4ebf-a917-6462c50316ce\") " pod="openstack/nova-api-db-create-vxrd2"
Dec 11 10:13:36 crc kubenswrapper[4746]: I1211 10:13:36.976181 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vxrd2"]
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.020096 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-757q5"]
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.021352 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-757q5"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.033783 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-757q5"]
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.056631 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkzb8\" (UniqueName: \"kubernetes.io/projected/e8bce084-6da4-4c08-804f-1dd5d7cfdd8a-kube-api-access-tkzb8\") pod \"nova-cell0-db-create-757q5\" (UID: \"e8bce084-6da4-4c08-804f-1dd5d7cfdd8a\") " pod="openstack/nova-cell0-db-create-757q5"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.056720 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-scripts\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.056747 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.056812 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8bce084-6da4-4c08-804f-1dd5d7cfdd8a-operator-scripts\") pod \"nova-cell0-db-create-757q5\" (UID: \"e8bce084-6da4-4c08-804f-1dd5d7cfdd8a\") " pod="openstack/nova-cell0-db-create-757q5"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.056854 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.056875 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hnlh\" (UniqueName: \"kubernetes.io/projected/041f7301-c875-4ebf-a917-6462c50316ce-kube-api-access-4hnlh\") pod \"nova-api-db-create-vxrd2\" (UID: \"041f7301-c875-4ebf-a917-6462c50316ce\") " pod="openstack/nova-api-db-create-vxrd2"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.056911 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-run-httpd\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.056928 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfsm8\" (UniqueName: \"kubernetes.io/projected/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-kube-api-access-hfsm8\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.056953 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-config-data\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.056976 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-log-httpd\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.057008 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041f7301-c875-4ebf-a917-6462c50316ce-operator-scripts\") pod \"nova-api-db-create-vxrd2\" (UID: \"041f7301-c875-4ebf-a917-6462c50316ce\") " pod="openstack/nova-api-db-create-vxrd2"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.057677 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041f7301-c875-4ebf-a917-6462c50316ce-operator-scripts\") pod \"nova-api-db-create-vxrd2\" (UID: \"041f7301-c875-4ebf-a917-6462c50316ce\") " pod="openstack/nova-api-db-create-vxrd2"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.058356 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-run-httpd\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.059146 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-log-httpd\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.062756 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.062999 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-scripts\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.064268 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.085705 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-config-data\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.094234 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfsm8\" (UniqueName: \"kubernetes.io/projected/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-kube-api-access-hfsm8\") pod \"ceilometer-0\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " pod="openstack/ceilometer-0"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.103518 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hnlh\" (UniqueName: \"kubernetes.io/projected/041f7301-c875-4ebf-a917-6462c50316ce-kube-api-access-4hnlh\") pod \"nova-api-db-create-vxrd2\" (UID: \"041f7301-c875-4ebf-a917-6462c50316ce\") " pod="openstack/nova-api-db-create-vxrd2"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.094308 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3ba3-account-create-update-lztww"]
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.141580 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3ba3-account-create-update-lztww"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.145134 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.180072 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkzb8\" (UniqueName: \"kubernetes.io/projected/e8bce084-6da4-4c08-804f-1dd5d7cfdd8a-kube-api-access-tkzb8\") pod \"nova-cell0-db-create-757q5\" (UID: \"e8bce084-6da4-4c08-804f-1dd5d7cfdd8a\") " pod="openstack/nova-cell0-db-create-757q5"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.180116 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km2xx\" (UniqueName: \"kubernetes.io/projected/fb138237-45b8-4bd9-a20d-0125fbac9770-kube-api-access-km2xx\") pod \"nova-api-3ba3-account-create-update-lztww\" (UID: \"fb138237-45b8-4bd9-a20d-0125fbac9770\") " pod="openstack/nova-api-3ba3-account-create-update-lztww"
Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.180148 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb138237-45b8-4bd9-a20d-0125fbac9770-operator-scripts\") pod \"nova-api-3ba3-account-create-update-lztww\" (UID: \"fb138237-45b8-4bd9-a20d-0125fbac9770\") " pod="openstack/nova-api-3ba3-account-create-update-lztww" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.180359 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8bce084-6da4-4c08-804f-1dd5d7cfdd8a-operator-scripts\") pod \"nova-cell0-db-create-757q5\" (UID: \"e8bce084-6da4-4c08-804f-1dd5d7cfdd8a\") " pod="openstack/nova-cell0-db-create-757q5" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.181240 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8bce084-6da4-4c08-804f-1dd5d7cfdd8a-operator-scripts\") pod \"nova-cell0-db-create-757q5\" (UID: \"e8bce084-6da4-4c08-804f-1dd5d7cfdd8a\") " pod="openstack/nova-cell0-db-create-757q5" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.183270 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3ba3-account-create-update-lztww"] Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.197459 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-pbrhh"] Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.199060 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-pbrhh" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.207885 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-pbrhh"] Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.208400 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkzb8\" (UniqueName: \"kubernetes.io/projected/e8bce084-6da4-4c08-804f-1dd5d7cfdd8a-kube-api-access-tkzb8\") pod \"nova-cell0-db-create-757q5\" (UID: \"e8bce084-6da4-4c08-804f-1dd5d7cfdd8a\") " pod="openstack/nova-cell0-db-create-757q5" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.222853 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.258969 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vxrd2" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.283793 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19ff8d66-0de2-4a8f-977a-810857fc5103-operator-scripts\") pod \"nova-cell1-db-create-pbrhh\" (UID: \"19ff8d66-0de2-4a8f-977a-810857fc5103\") " pod="openstack/nova-cell1-db-create-pbrhh" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.283928 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-km2xx\" (UniqueName: \"kubernetes.io/projected/fb138237-45b8-4bd9-a20d-0125fbac9770-kube-api-access-km2xx\") pod \"nova-api-3ba3-account-create-update-lztww\" (UID: \"fb138237-45b8-4bd9-a20d-0125fbac9770\") " pod="openstack/nova-api-3ba3-account-create-update-lztww" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.283976 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/fb138237-45b8-4bd9-a20d-0125fbac9770-operator-scripts\") pod \"nova-api-3ba3-account-create-update-lztww\" (UID: \"fb138237-45b8-4bd9-a20d-0125fbac9770\") " pod="openstack/nova-api-3ba3-account-create-update-lztww" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.284973 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb138237-45b8-4bd9-a20d-0125fbac9770-operator-scripts\") pod \"nova-api-3ba3-account-create-update-lztww\" (UID: \"fb138237-45b8-4bd9-a20d-0125fbac9770\") " pod="openstack/nova-api-3ba3-account-create-update-lztww" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.285022 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhlqd\" (UniqueName: \"kubernetes.io/projected/19ff8d66-0de2-4a8f-977a-810857fc5103-kube-api-access-zhlqd\") pod \"nova-cell1-db-create-pbrhh\" (UID: \"19ff8d66-0de2-4a8f-977a-810857fc5103\") " pod="openstack/nova-cell1-db-create-pbrhh" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.320428 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-km2xx\" (UniqueName: \"kubernetes.io/projected/fb138237-45b8-4bd9-a20d-0125fbac9770-kube-api-access-km2xx\") pod \"nova-api-3ba3-account-create-update-lztww\" (UID: \"fb138237-45b8-4bd9-a20d-0125fbac9770\") " pod="openstack/nova-api-3ba3-account-create-update-lztww" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.326488 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-626c-account-create-update-46ncn"] Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.327884 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-626c-account-create-update-46ncn" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.335322 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-626c-account-create-update-46ncn"] Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.339441 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.342022 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-757q5" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.393022 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhlqd\" (UniqueName: \"kubernetes.io/projected/19ff8d66-0de2-4a8f-977a-810857fc5103-kube-api-access-zhlqd\") pod \"nova-cell1-db-create-pbrhh\" (UID: \"19ff8d66-0de2-4a8f-977a-810857fc5103\") " pod="openstack/nova-cell1-db-create-pbrhh" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.393101 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f531d95a-9fa0-4897-bdea-ee3e43914203-operator-scripts\") pod \"nova-cell0-626c-account-create-update-46ncn\" (UID: \"f531d95a-9fa0-4897-bdea-ee3e43914203\") " pod="openstack/nova-cell0-626c-account-create-update-46ncn" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.393222 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v82q2\" (UniqueName: \"kubernetes.io/projected/f531d95a-9fa0-4897-bdea-ee3e43914203-kube-api-access-v82q2\") pod \"nova-cell0-626c-account-create-update-46ncn\" (UID: \"f531d95a-9fa0-4897-bdea-ee3e43914203\") " pod="openstack/nova-cell0-626c-account-create-update-46ncn" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.393301 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19ff8d66-0de2-4a8f-977a-810857fc5103-operator-scripts\") pod \"nova-cell1-db-create-pbrhh\" (UID: \"19ff8d66-0de2-4a8f-977a-810857fc5103\") " pod="openstack/nova-cell1-db-create-pbrhh" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.408365 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19ff8d66-0de2-4a8f-977a-810857fc5103-operator-scripts\") pod \"nova-cell1-db-create-pbrhh\" (UID: \"19ff8d66-0de2-4a8f-977a-810857fc5103\") " pod="openstack/nova-cell1-db-create-pbrhh" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.433702 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhlqd\" (UniqueName: \"kubernetes.io/projected/19ff8d66-0de2-4a8f-977a-810857fc5103-kube-api-access-zhlqd\") pod \"nova-cell1-db-create-pbrhh\" (UID: \"19ff8d66-0de2-4a8f-977a-810857fc5103\") " pod="openstack/nova-cell1-db-create-pbrhh" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.494678 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v82q2\" (UniqueName: \"kubernetes.io/projected/f531d95a-9fa0-4897-bdea-ee3e43914203-kube-api-access-v82q2\") pod \"nova-cell0-626c-account-create-update-46ncn\" (UID: \"f531d95a-9fa0-4897-bdea-ee3e43914203\") " pod="openstack/nova-cell0-626c-account-create-update-46ncn" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.494817 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f531d95a-9fa0-4897-bdea-ee3e43914203-operator-scripts\") pod \"nova-cell0-626c-account-create-update-46ncn\" (UID: \"f531d95a-9fa0-4897-bdea-ee3e43914203\") " pod="openstack/nova-cell0-626c-account-create-update-46ncn" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.495651 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f531d95a-9fa0-4897-bdea-ee3e43914203-operator-scripts\") pod \"nova-cell0-626c-account-create-update-46ncn\" (UID: \"f531d95a-9fa0-4897-bdea-ee3e43914203\") " pod="openstack/nova-cell0-626c-account-create-update-46ncn" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.496098 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3ba3-account-create-update-lztww" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.554823 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v82q2\" (UniqueName: \"kubernetes.io/projected/f531d95a-9fa0-4897-bdea-ee3e43914203-kube-api-access-v82q2\") pod \"nova-cell0-626c-account-create-update-46ncn\" (UID: \"f531d95a-9fa0-4897-bdea-ee3e43914203\") " pod="openstack/nova-cell0-626c-account-create-update-46ncn" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.555393 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-2f33-account-create-update-4plh6"] Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.557365 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-2f33-account-create-update-4plh6" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.561397 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.571071 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2f33-account-create-update-4plh6"] Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.682035 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="016a98db-33b2-4acb-a360-5e8a55aebd6c" path="/var/lib/kubelet/pods/016a98db-33b2-4acb-a360-5e8a55aebd6c/volumes" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.685725 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pbrhh" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.686306 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe8e4b3-1a4f-45a2-8791-0ecced71da8d" path="/var/lib/kubelet/pods/afe8e4b3-1a4f-45a2-8791-0ecced71da8d/volumes" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.700023 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn8zz\" (UniqueName: \"kubernetes.io/projected/da4dce39-8df9-4243-9c8e-268f8f662c97-kube-api-access-pn8zz\") pod \"nova-cell1-2f33-account-create-update-4plh6\" (UID: \"da4dce39-8df9-4243-9c8e-268f8f662c97\") " pod="openstack/nova-cell1-2f33-account-create-update-4plh6" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.701273 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da4dce39-8df9-4243-9c8e-268f8f662c97-operator-scripts\") pod \"nova-cell1-2f33-account-create-update-4plh6\" (UID: \"da4dce39-8df9-4243-9c8e-268f8f662c97\") " 
pod="openstack/nova-cell1-2f33-account-create-update-4plh6" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.709358 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-626c-account-create-update-46ncn" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.805073 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn8zz\" (UniqueName: \"kubernetes.io/projected/da4dce39-8df9-4243-9c8e-268f8f662c97-kube-api-access-pn8zz\") pod \"nova-cell1-2f33-account-create-update-4plh6\" (UID: \"da4dce39-8df9-4243-9c8e-268f8f662c97\") " pod="openstack/nova-cell1-2f33-account-create-update-4plh6" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.805229 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da4dce39-8df9-4243-9c8e-268f8f662c97-operator-scripts\") pod \"nova-cell1-2f33-account-create-update-4plh6\" (UID: \"da4dce39-8df9-4243-9c8e-268f8f662c97\") " pod="openstack/nova-cell1-2f33-account-create-update-4plh6" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.808022 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da4dce39-8df9-4243-9c8e-268f8f662c97-operator-scripts\") pod \"nova-cell1-2f33-account-create-update-4plh6\" (UID: \"da4dce39-8df9-4243-9c8e-268f8f662c97\") " pod="openstack/nova-cell1-2f33-account-create-update-4plh6" Dec 11 10:13:37 crc kubenswrapper[4746]: I1211 10:13:37.837726 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn8zz\" (UniqueName: \"kubernetes.io/projected/da4dce39-8df9-4243-9c8e-268f8f662c97-kube-api-access-pn8zz\") pod \"nova-cell1-2f33-account-create-update-4plh6\" (UID: \"da4dce39-8df9-4243-9c8e-268f8f662c97\") " pod="openstack/nova-cell1-2f33-account-create-update-4plh6" Dec 11 10:13:37 crc kubenswrapper[4746]: 
I1211 10:13:37.899656 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2f33-account-create-update-4plh6" Dec 11 10:13:38 crc kubenswrapper[4746]: I1211 10:13:38.125638 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-vxrd2"] Dec 11 10:13:38 crc kubenswrapper[4746]: I1211 10:13:38.190531 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-757q5"] Dec 11 10:13:38 crc kubenswrapper[4746]: W1211 10:13:38.199985 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8bce084_6da4_4c08_804f_1dd5d7cfdd8a.slice/crio-dae7d363b4f3c2274517a704d1d725199965e62d9a288850e6ef63901e8549ab WatchSource:0}: Error finding container dae7d363b4f3c2274517a704d1d725199965e62d9a288850e6ef63901e8549ab: Status 404 returned error can't find the container with id dae7d363b4f3c2274517a704d1d725199965e62d9a288850e6ef63901e8549ab Dec 11 10:13:38 crc kubenswrapper[4746]: I1211 10:13:38.221563 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:13:38 crc kubenswrapper[4746]: I1211 10:13:38.595140 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3ba3-account-create-update-lztww"] Dec 11 10:13:38 crc kubenswrapper[4746]: I1211 10:13:38.829429 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-757q5" event={"ID":"e8bce084-6da4-4c08-804f-1dd5d7cfdd8a","Type":"ContainerStarted","Data":"f7f3521ec299e4cfbd209fb3e292113d40932640b9bd11742f7c25d17eeecd8b"} Dec 11 10:13:38 crc kubenswrapper[4746]: I1211 10:13:38.829506 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-757q5" event={"ID":"e8bce084-6da4-4c08-804f-1dd5d7cfdd8a","Type":"ContainerStarted","Data":"dae7d363b4f3c2274517a704d1d725199965e62d9a288850e6ef63901e8549ab"} Dec 11 10:13:38 crc 
kubenswrapper[4746]: I1211 10:13:38.839598 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e24f590f-4ee4-47c4-b3bb-89279ed4acbf","Type":"ContainerStarted","Data":"d21496c8548eee5d15112e4c54914ba761555677cc1ecbb6e02ee3434c6dff93"} Dec 11 10:13:38 crc kubenswrapper[4746]: I1211 10:13:38.846774 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3ba3-account-create-update-lztww" event={"ID":"fb138237-45b8-4bd9-a20d-0125fbac9770","Type":"ContainerStarted","Data":"7b2f4b94d021de474349f15ce77624caad932c39dc689b0e938ebf01129a9e86"} Dec 11 10:13:38 crc kubenswrapper[4746]: I1211 10:13:38.853084 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-757q5" podStartSLOduration=2.85303463 podStartE2EDuration="2.85303463s" podCreationTimestamp="2025-12-11 10:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:38.850205244 +0000 UTC m=+1191.710068567" watchObservedRunningTime="2025-12-11 10:13:38.85303463 +0000 UTC m=+1191.712897943" Dec 11 10:13:38 crc kubenswrapper[4746]: I1211 10:13:38.862430 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vxrd2" event={"ID":"041f7301-c875-4ebf-a917-6462c50316ce","Type":"ContainerStarted","Data":"37233fdaed4ebeddbea6deeb96029bb2921b9a98cb76a888ec67b249eaed0962"} Dec 11 10:13:38 crc kubenswrapper[4746]: I1211 10:13:38.862553 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vxrd2" event={"ID":"041f7301-c875-4ebf-a917-6462c50316ce","Type":"ContainerStarted","Data":"4eb03bee4b55f5e7a40bbb385c4061c093d8fcf770bb78758a92142fe0c2249b"} Dec 11 10:13:38 crc kubenswrapper[4746]: I1211 10:13:38.929408 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-pbrhh"] Dec 11 10:13:38 crc kubenswrapper[4746]: 
I1211 10:13:38.944844 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-vxrd2" podStartSLOduration=2.944813173 podStartE2EDuration="2.944813173s" podCreationTimestamp="2025-12-11 10:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:13:38.882654245 +0000 UTC m=+1191.742517558" watchObservedRunningTime="2025-12-11 10:13:38.944813173 +0000 UTC m=+1191.804676486" Dec 11 10:13:38 crc kubenswrapper[4746]: I1211 10:13:38.976205 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-626c-account-create-update-46ncn"] Dec 11 10:13:38 crc kubenswrapper[4746]: I1211 10:13:38.992764 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-2f33-account-create-update-4plh6"] Dec 11 10:13:39 crc kubenswrapper[4746]: I1211 10:13:39.472415 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:13:39 crc kubenswrapper[4746]: I1211 10:13:39.877867 4746 generic.go:334] "Generic (PLEG): container finished" podID="fb138237-45b8-4bd9-a20d-0125fbac9770" containerID="a64766f15ac7662c2bbcf7258aa33fbe77259b960f0fc146f2021e7c9ae1e52d" exitCode=0 Dec 11 10:13:39 crc kubenswrapper[4746]: I1211 10:13:39.878332 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3ba3-account-create-update-lztww" event={"ID":"fb138237-45b8-4bd9-a20d-0125fbac9770","Type":"ContainerDied","Data":"a64766f15ac7662c2bbcf7258aa33fbe77259b960f0fc146f2021e7c9ae1e52d"} Dec 11 10:13:39 crc kubenswrapper[4746]: I1211 10:13:39.883362 4746 generic.go:334] "Generic (PLEG): container finished" podID="f531d95a-9fa0-4897-bdea-ee3e43914203" containerID="911ecaedc7f0b2cd2dac2054b4ebc17c47bd5e59baf721d3d50cfec2dc01e3a1" exitCode=0 Dec 11 10:13:39 crc kubenswrapper[4746]: I1211 10:13:39.883443 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-626c-account-create-update-46ncn" event={"ID":"f531d95a-9fa0-4897-bdea-ee3e43914203","Type":"ContainerDied","Data":"911ecaedc7f0b2cd2dac2054b4ebc17c47bd5e59baf721d3d50cfec2dc01e3a1"} Dec 11 10:13:39 crc kubenswrapper[4746]: I1211 10:13:39.883467 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-626c-account-create-update-46ncn" event={"ID":"f531d95a-9fa0-4897-bdea-ee3e43914203","Type":"ContainerStarted","Data":"f83fb67abb7e7af0d2e1c7a8aae97a3e2142e335daeb19f4cba358ac8c65b702"} Dec 11 10:13:39 crc kubenswrapper[4746]: I1211 10:13:39.885377 4746 generic.go:334] "Generic (PLEG): container finished" podID="041f7301-c875-4ebf-a917-6462c50316ce" containerID="37233fdaed4ebeddbea6deeb96029bb2921b9a98cb76a888ec67b249eaed0962" exitCode=0 Dec 11 10:13:39 crc kubenswrapper[4746]: I1211 10:13:39.885514 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vxrd2" event={"ID":"041f7301-c875-4ebf-a917-6462c50316ce","Type":"ContainerDied","Data":"37233fdaed4ebeddbea6deeb96029bb2921b9a98cb76a888ec67b249eaed0962"} Dec 11 10:13:39 crc kubenswrapper[4746]: I1211 10:13:39.887604 4746 generic.go:334] "Generic (PLEG): container finished" podID="e8bce084-6da4-4c08-804f-1dd5d7cfdd8a" containerID="f7f3521ec299e4cfbd209fb3e292113d40932640b9bd11742f7c25d17eeecd8b" exitCode=0 Dec 11 10:13:39 crc kubenswrapper[4746]: I1211 10:13:39.887640 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-757q5" event={"ID":"e8bce084-6da4-4c08-804f-1dd5d7cfdd8a","Type":"ContainerDied","Data":"f7f3521ec299e4cfbd209fb3e292113d40932640b9bd11742f7c25d17eeecd8b"} Dec 11 10:13:39 crc kubenswrapper[4746]: I1211 10:13:39.889149 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e24f590f-4ee4-47c4-b3bb-89279ed4acbf","Type":"ContainerStarted","Data":"8aabe732ef936f5735d39c50b76cab5561901e3e56a01bfae13995a193cd5765"} Dec 11 10:13:39 crc 
kubenswrapper[4746]: I1211 10:13:39.890674 4746 generic.go:334] "Generic (PLEG): container finished" podID="19ff8d66-0de2-4a8f-977a-810857fc5103" containerID="952fd9a208e4982afaf0814269e62ab337a99f62e259927b756f734180a87d31" exitCode=0 Dec 11 10:13:39 crc kubenswrapper[4746]: I1211 10:13:39.890720 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pbrhh" event={"ID":"19ff8d66-0de2-4a8f-977a-810857fc5103","Type":"ContainerDied","Data":"952fd9a208e4982afaf0814269e62ab337a99f62e259927b756f734180a87d31"} Dec 11 10:13:39 crc kubenswrapper[4746]: I1211 10:13:39.890737 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pbrhh" event={"ID":"19ff8d66-0de2-4a8f-977a-810857fc5103","Type":"ContainerStarted","Data":"7264c20f7e221649903ad296d118015ad6bbf2f2c8c1cdd007a4a4942bf27b4e"} Dec 11 10:13:39 crc kubenswrapper[4746]: I1211 10:13:39.891998 4746 generic.go:334] "Generic (PLEG): container finished" podID="da4dce39-8df9-4243-9c8e-268f8f662c97" containerID="18ed54230b33fbd7c3567795524ead5403f88f23ef97cbc043ce39c506eff7eb" exitCode=0 Dec 11 10:13:39 crc kubenswrapper[4746]: I1211 10:13:39.892060 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2f33-account-create-update-4plh6" event={"ID":"da4dce39-8df9-4243-9c8e-268f8f662c97","Type":"ContainerDied","Data":"18ed54230b33fbd7c3567795524ead5403f88f23ef97cbc043ce39c506eff7eb"} Dec 11 10:13:39 crc kubenswrapper[4746]: I1211 10:13:39.892116 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2f33-account-create-update-4plh6" event={"ID":"da4dce39-8df9-4243-9c8e-268f8f662c97","Type":"ContainerStarted","Data":"b93f74f96c9eb9cc6a511d66a932c736412e9aaf00058f4c037613fc811642e4"} Dec 11 10:13:40 crc kubenswrapper[4746]: I1211 10:13:40.585844 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 10:13:40 crc kubenswrapper[4746]: 
I1211 10:13:40.585914 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 11 10:13:40 crc kubenswrapper[4746]: I1211 10:13:40.630984 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 10:13:40 crc kubenswrapper[4746]: I1211 10:13:40.635248 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 11 10:13:40 crc kubenswrapper[4746]: I1211 10:13:40.901483 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 10:13:40 crc kubenswrapper[4746]: I1211 10:13:40.902005 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.530644 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3ba3-account-create-update-lztww" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.616703 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb138237-45b8-4bd9-a20d-0125fbac9770-operator-scripts\") pod \"fb138237-45b8-4bd9-a20d-0125fbac9770\" (UID: \"fb138237-45b8-4bd9-a20d-0125fbac9770\") " Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.617206 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-km2xx\" (UniqueName: \"kubernetes.io/projected/fb138237-45b8-4bd9-a20d-0125fbac9770-kube-api-access-km2xx\") pod \"fb138237-45b8-4bd9-a20d-0125fbac9770\" (UID: \"fb138237-45b8-4bd9-a20d-0125fbac9770\") " Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.617543 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/fb138237-45b8-4bd9-a20d-0125fbac9770-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fb138237-45b8-4bd9-a20d-0125fbac9770" (UID: "fb138237-45b8-4bd9-a20d-0125fbac9770"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.618207 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fb138237-45b8-4bd9-a20d-0125fbac9770-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.628209 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb138237-45b8-4bd9-a20d-0125fbac9770-kube-api-access-km2xx" (OuterVolumeSpecName: "kube-api-access-km2xx") pod "fb138237-45b8-4bd9-a20d-0125fbac9770" (UID: "fb138237-45b8-4bd9-a20d-0125fbac9770"). InnerVolumeSpecName "kube-api-access-km2xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.722156 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-km2xx\" (UniqueName: \"kubernetes.io/projected/fb138237-45b8-4bd9-a20d-0125fbac9770-kube-api-access-km2xx\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.729747 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vxrd2" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.729927 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pbrhh" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.735411 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-757q5" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.767152 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2f33-account-create-update-4plh6" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.796286 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-626c-account-create-update-46ncn" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.841007 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkzb8\" (UniqueName: \"kubernetes.io/projected/e8bce084-6da4-4c08-804f-1dd5d7cfdd8a-kube-api-access-tkzb8\") pod \"e8bce084-6da4-4c08-804f-1dd5d7cfdd8a\" (UID: \"e8bce084-6da4-4c08-804f-1dd5d7cfdd8a\") " Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.841103 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhlqd\" (UniqueName: \"kubernetes.io/projected/19ff8d66-0de2-4a8f-977a-810857fc5103-kube-api-access-zhlqd\") pod \"19ff8d66-0de2-4a8f-977a-810857fc5103\" (UID: \"19ff8d66-0de2-4a8f-977a-810857fc5103\") " Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.841134 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041f7301-c875-4ebf-a917-6462c50316ce-operator-scripts\") pod \"041f7301-c875-4ebf-a917-6462c50316ce\" (UID: \"041f7301-c875-4ebf-a917-6462c50316ce\") " Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.841202 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8bce084-6da4-4c08-804f-1dd5d7cfdd8a-operator-scripts\") pod \"e8bce084-6da4-4c08-804f-1dd5d7cfdd8a\" (UID: \"e8bce084-6da4-4c08-804f-1dd5d7cfdd8a\") " Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.841246 4746 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19ff8d66-0de2-4a8f-977a-810857fc5103-operator-scripts\") pod \"19ff8d66-0de2-4a8f-977a-810857fc5103\" (UID: \"19ff8d66-0de2-4a8f-977a-810857fc5103\") " Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.841277 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hnlh\" (UniqueName: \"kubernetes.io/projected/041f7301-c875-4ebf-a917-6462c50316ce-kube-api-access-4hnlh\") pod \"041f7301-c875-4ebf-a917-6462c50316ce\" (UID: \"041f7301-c875-4ebf-a917-6462c50316ce\") " Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.842129 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8bce084-6da4-4c08-804f-1dd5d7cfdd8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e8bce084-6da4-4c08-804f-1dd5d7cfdd8a" (UID: "e8bce084-6da4-4c08-804f-1dd5d7cfdd8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.842630 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/041f7301-c875-4ebf-a917-6462c50316ce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "041f7301-c875-4ebf-a917-6462c50316ce" (UID: "041f7301-c875-4ebf-a917-6462c50316ce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.845336 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19ff8d66-0de2-4a8f-977a-810857fc5103-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "19ff8d66-0de2-4a8f-977a-810857fc5103" (UID: "19ff8d66-0de2-4a8f-977a-810857fc5103"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.845857 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8bce084-6da4-4c08-804f-1dd5d7cfdd8a-kube-api-access-tkzb8" (OuterVolumeSpecName: "kube-api-access-tkzb8") pod "e8bce084-6da4-4c08-804f-1dd5d7cfdd8a" (UID: "e8bce084-6da4-4c08-804f-1dd5d7cfdd8a"). InnerVolumeSpecName "kube-api-access-tkzb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.846329 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19ff8d66-0de2-4a8f-977a-810857fc5103-kube-api-access-zhlqd" (OuterVolumeSpecName: "kube-api-access-zhlqd") pod "19ff8d66-0de2-4a8f-977a-810857fc5103" (UID: "19ff8d66-0de2-4a8f-977a-810857fc5103"). InnerVolumeSpecName "kube-api-access-zhlqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.849758 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041f7301-c875-4ebf-a917-6462c50316ce-kube-api-access-4hnlh" (OuterVolumeSpecName: "kube-api-access-4hnlh") pod "041f7301-c875-4ebf-a917-6462c50316ce" (UID: "041f7301-c875-4ebf-a917-6462c50316ce"). InnerVolumeSpecName "kube-api-access-4hnlh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.920850 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-2f33-account-create-update-4plh6" event={"ID":"da4dce39-8df9-4243-9c8e-268f8f662c97","Type":"ContainerDied","Data":"b93f74f96c9eb9cc6a511d66a932c736412e9aaf00058f4c037613fc811642e4"} Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.920896 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b93f74f96c9eb9cc6a511d66a932c736412e9aaf00058f4c037613fc811642e4" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.920971 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-2f33-account-create-update-4plh6" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.925762 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3ba3-account-create-update-lztww" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.925852 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3ba3-account-create-update-lztww" event={"ID":"fb138237-45b8-4bd9-a20d-0125fbac9770","Type":"ContainerDied","Data":"7b2f4b94d021de474349f15ce77624caad932c39dc689b0e938ebf01129a9e86"} Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.925929 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7b2f4b94d021de474349f15ce77624caad932c39dc689b0e938ebf01129a9e86" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.930849 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-626c-account-create-update-46ncn" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.930863 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-626c-account-create-update-46ncn" event={"ID":"f531d95a-9fa0-4897-bdea-ee3e43914203","Type":"ContainerDied","Data":"f83fb67abb7e7af0d2e1c7a8aae97a3e2142e335daeb19f4cba358ac8c65b702"} Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.930891 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f83fb67abb7e7af0d2e1c7a8aae97a3e2142e335daeb19f4cba358ac8c65b702" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.935717 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-vxrd2" event={"ID":"041f7301-c875-4ebf-a917-6462c50316ce","Type":"ContainerDied","Data":"4eb03bee4b55f5e7a40bbb385c4061c093d8fcf770bb78758a92142fe0c2249b"} Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.935779 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eb03bee4b55f5e7a40bbb385c4061c093d8fcf770bb78758a92142fe0c2249b" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.935866 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-vxrd2" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.938964 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-757q5" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.938951 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-757q5" event={"ID":"e8bce084-6da4-4c08-804f-1dd5d7cfdd8a","Type":"ContainerDied","Data":"dae7d363b4f3c2274517a704d1d725199965e62d9a288850e6ef63901e8549ab"} Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.939236 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dae7d363b4f3c2274517a704d1d725199965e62d9a288850e6ef63901e8549ab" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.942011 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pbrhh" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.942082 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pbrhh" event={"ID":"19ff8d66-0de2-4a8f-977a-810857fc5103","Type":"ContainerDied","Data":"7264c20f7e221649903ad296d118015ad6bbf2f2c8c1cdd007a4a4942bf27b4e"} Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.942131 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7264c20f7e221649903ad296d118015ad6bbf2f2c8c1cdd007a4a4942bf27b4e" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.947279 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn8zz\" (UniqueName: \"kubernetes.io/projected/da4dce39-8df9-4243-9c8e-268f8f662c97-kube-api-access-pn8zz\") pod \"da4dce39-8df9-4243-9c8e-268f8f662c97\" (UID: \"da4dce39-8df9-4243-9c8e-268f8f662c97\") " Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.947365 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f531d95a-9fa0-4897-bdea-ee3e43914203-operator-scripts\") pod \"f531d95a-9fa0-4897-bdea-ee3e43914203\" (UID: 
\"f531d95a-9fa0-4897-bdea-ee3e43914203\") " Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.947498 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da4dce39-8df9-4243-9c8e-268f8f662c97-operator-scripts\") pod \"da4dce39-8df9-4243-9c8e-268f8f662c97\" (UID: \"da4dce39-8df9-4243-9c8e-268f8f662c97\") " Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.947528 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v82q2\" (UniqueName: \"kubernetes.io/projected/f531d95a-9fa0-4897-bdea-ee3e43914203-kube-api-access-v82q2\") pod \"f531d95a-9fa0-4897-bdea-ee3e43914203\" (UID: \"f531d95a-9fa0-4897-bdea-ee3e43914203\") " Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.948095 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da4dce39-8df9-4243-9c8e-268f8f662c97-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da4dce39-8df9-4243-9c8e-268f8f662c97" (UID: "da4dce39-8df9-4243-9c8e-268f8f662c97"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.948179 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f531d95a-9fa0-4897-bdea-ee3e43914203-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f531d95a-9fa0-4897-bdea-ee3e43914203" (UID: "f531d95a-9fa0-4897-bdea-ee3e43914203"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.948625 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f531d95a-9fa0-4897-bdea-ee3e43914203-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.948654 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da4dce39-8df9-4243-9c8e-268f8f662c97-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.948667 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkzb8\" (UniqueName: \"kubernetes.io/projected/e8bce084-6da4-4c08-804f-1dd5d7cfdd8a-kube-api-access-tkzb8\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.948684 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhlqd\" (UniqueName: \"kubernetes.io/projected/19ff8d66-0de2-4a8f-977a-810857fc5103-kube-api-access-zhlqd\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.948696 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/041f7301-c875-4ebf-a917-6462c50316ce-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.948710 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8bce084-6da4-4c08-804f-1dd5d7cfdd8a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.948721 4746 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/19ff8d66-0de2-4a8f-977a-810857fc5103-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:41 crc 
kubenswrapper[4746]: I1211 10:13:41.948731 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hnlh\" (UniqueName: \"kubernetes.io/projected/041f7301-c875-4ebf-a917-6462c50316ce-kube-api-access-4hnlh\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.951620 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da4dce39-8df9-4243-9c8e-268f8f662c97-kube-api-access-pn8zz" (OuterVolumeSpecName: "kube-api-access-pn8zz") pod "da4dce39-8df9-4243-9c8e-268f8f662c97" (UID: "da4dce39-8df9-4243-9c8e-268f8f662c97"). InnerVolumeSpecName "kube-api-access-pn8zz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.954445 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f531d95a-9fa0-4897-bdea-ee3e43914203-kube-api-access-v82q2" (OuterVolumeSpecName: "kube-api-access-v82q2") pod "f531d95a-9fa0-4897-bdea-ee3e43914203" (UID: "f531d95a-9fa0-4897-bdea-ee3e43914203"). InnerVolumeSpecName "kube-api-access-v82q2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:13:41 crc kubenswrapper[4746]: I1211 10:13:41.970321 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e24f590f-4ee4-47c4-b3bb-89279ed4acbf","Type":"ContainerStarted","Data":"cf14edda06ca2f03c4478d4cb9067a162525121fef33f8b88c966d8dce691fdb"} Dec 11 10:13:42 crc kubenswrapper[4746]: I1211 10:13:42.050594 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn8zz\" (UniqueName: \"kubernetes.io/projected/da4dce39-8df9-4243-9c8e-268f8f662c97-kube-api-access-pn8zz\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:42 crc kubenswrapper[4746]: I1211 10:13:42.050638 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v82q2\" (UniqueName: \"kubernetes.io/projected/f531d95a-9fa0-4897-bdea-ee3e43914203-kube-api-access-v82q2\") on node \"crc\" DevicePath \"\"" Dec 11 10:13:42 crc kubenswrapper[4746]: I1211 10:13:42.247910 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 10:13:42 crc kubenswrapper[4746]: I1211 10:13:42.248021 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 11 10:13:42 crc kubenswrapper[4746]: I1211 10:13:42.300478 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 10:13:42 crc kubenswrapper[4746]: I1211 10:13:42.307311 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 11 10:13:42 crc kubenswrapper[4746]: I1211 10:13:42.983173 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 11 10:13:42 crc kubenswrapper[4746]: I1211 10:13:42.983208 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/glance-default-internal-api-0" Dec 11 10:13:43 crc kubenswrapper[4746]: I1211 10:13:43.997975 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e24f590f-4ee4-47c4-b3bb-89279ed4acbf","Type":"ContainerStarted","Data":"f4e7c1fd7eed197b97968194365223d1885e44914abe0888f8265a6f03defffd"} Dec 11 10:13:44 crc kubenswrapper[4746]: I1211 10:13:44.230208 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 10:13:44 crc kubenswrapper[4746]: I1211 10:13:44.230761 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:13:44 crc kubenswrapper[4746]: I1211 10:13:44.231907 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 11 10:13:45 crc kubenswrapper[4746]: I1211 10:13:45.043412 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:13:45 crc kubenswrapper[4746]: I1211 10:13:45.045613 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:13:46 crc kubenswrapper[4746]: I1211 10:13:46.047341 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e24f590f-4ee4-47c4-b3bb-89279ed4acbf","Type":"ContainerStarted","Data":"6a76b08aec168b33077cb6f68b86233699624a48a6de28649c6bb3419ea04796"} Dec 11 10:13:46 crc kubenswrapper[4746]: I1211 10:13:46.048513 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerName="ceilometer-central-agent" containerID="cri-o://8aabe732ef936f5735d39c50b76cab5561901e3e56a01bfae13995a193cd5765" gracePeriod=30 Dec 11 10:13:46 crc kubenswrapper[4746]: I1211 10:13:46.048571 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 10:13:46 crc kubenswrapper[4746]: I1211 
10:13:46.048593 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerName="sg-core" containerID="cri-o://f4e7c1fd7eed197b97968194365223d1885e44914abe0888f8265a6f03defffd" gracePeriod=30 Dec 11 10:13:46 crc kubenswrapper[4746]: I1211 10:13:46.048593 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerName="ceilometer-notification-agent" containerID="cri-o://cf14edda06ca2f03c4478d4cb9067a162525121fef33f8b88c966d8dce691fdb" gracePeriod=30 Dec 11 10:13:46 crc kubenswrapper[4746]: I1211 10:13:46.048586 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerName="proxy-httpd" containerID="cri-o://6a76b08aec168b33077cb6f68b86233699624a48a6de28649c6bb3419ea04796" gracePeriod=30 Dec 11 10:13:46 crc kubenswrapper[4746]: I1211 10:13:46.078203 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.036747003 podStartE2EDuration="10.078185623s" podCreationTimestamp="2025-12-11 10:13:36 +0000 UTC" firstStartedPulling="2025-12-11 10:13:38.553816865 +0000 UTC m=+1191.413680178" lastFinishedPulling="2025-12-11 10:13:44.595255485 +0000 UTC m=+1197.455118798" observedRunningTime="2025-12-11 10:13:46.071444222 +0000 UTC m=+1198.931307525" watchObservedRunningTime="2025-12-11 10:13:46.078185623 +0000 UTC m=+1198.938048936" Dec 11 10:13:46 crc kubenswrapper[4746]: I1211 10:13:46.416970 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 10:13:46 crc kubenswrapper[4746]: I1211 10:13:46.417107 4746 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 11 10:13:46 crc kubenswrapper[4746]: I1211 10:13:46.418947 4746 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.191279 4746 generic.go:334] "Generic (PLEG): container finished" podID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerID="6a76b08aec168b33077cb6f68b86233699624a48a6de28649c6bb3419ea04796" exitCode=0 Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.191330 4746 generic.go:334] "Generic (PLEG): container finished" podID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerID="f4e7c1fd7eed197b97968194365223d1885e44914abe0888f8265a6f03defffd" exitCode=2 Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.191346 4746 generic.go:334] "Generic (PLEG): container finished" podID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerID="cf14edda06ca2f03c4478d4cb9067a162525121fef33f8b88c966d8dce691fdb" exitCode=0 Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.192430 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e24f590f-4ee4-47c4-b3bb-89279ed4acbf","Type":"ContainerDied","Data":"6a76b08aec168b33077cb6f68b86233699624a48a6de28649c6bb3419ea04796"} Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.192470 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e24f590f-4ee4-47c4-b3bb-89279ed4acbf","Type":"ContainerDied","Data":"f4e7c1fd7eed197b97968194365223d1885e44914abe0888f8265a6f03defffd"} Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.192486 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e24f590f-4ee4-47c4-b3bb-89279ed4acbf","Type":"ContainerDied","Data":"cf14edda06ca2f03c4478d4cb9067a162525121fef33f8b88c966d8dce691fdb"} Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.741224 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f6gcs"] Dec 11 10:13:47 crc kubenswrapper[4746]: E1211 
10:13:47.742778 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8bce084-6da4-4c08-804f-1dd5d7cfdd8a" containerName="mariadb-database-create" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.742808 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8bce084-6da4-4c08-804f-1dd5d7cfdd8a" containerName="mariadb-database-create" Dec 11 10:13:47 crc kubenswrapper[4746]: E1211 10:13:47.742858 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f531d95a-9fa0-4897-bdea-ee3e43914203" containerName="mariadb-account-create-update" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.742870 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f531d95a-9fa0-4897-bdea-ee3e43914203" containerName="mariadb-account-create-update" Dec 11 10:13:47 crc kubenswrapper[4746]: E1211 10:13:47.742890 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19ff8d66-0de2-4a8f-977a-810857fc5103" containerName="mariadb-database-create" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.742900 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="19ff8d66-0de2-4a8f-977a-810857fc5103" containerName="mariadb-database-create" Dec 11 10:13:47 crc kubenswrapper[4746]: E1211 10:13:47.742934 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb138237-45b8-4bd9-a20d-0125fbac9770" containerName="mariadb-account-create-update" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.742946 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb138237-45b8-4bd9-a20d-0125fbac9770" containerName="mariadb-account-create-update" Dec 11 10:13:47 crc kubenswrapper[4746]: E1211 10:13:47.742966 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041f7301-c875-4ebf-a917-6462c50316ce" containerName="mariadb-database-create" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.742978 4746 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="041f7301-c875-4ebf-a917-6462c50316ce" containerName="mariadb-database-create" Dec 11 10:13:47 crc kubenswrapper[4746]: E1211 10:13:47.743011 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da4dce39-8df9-4243-9c8e-268f8f662c97" containerName="mariadb-account-create-update" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.743020 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="da4dce39-8df9-4243-9c8e-268f8f662c97" containerName="mariadb-account-create-update" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.744212 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb138237-45b8-4bd9-a20d-0125fbac9770" containerName="mariadb-account-create-update" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.744257 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8bce084-6da4-4c08-804f-1dd5d7cfdd8a" containerName="mariadb-database-create" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.744282 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f531d95a-9fa0-4897-bdea-ee3e43914203" containerName="mariadb-account-create-update" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.744298 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="19ff8d66-0de2-4a8f-977a-810857fc5103" containerName="mariadb-database-create" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.744325 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="da4dce39-8df9-4243-9c8e-268f8f662c97" containerName="mariadb-account-create-update" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.744361 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="041f7301-c875-4ebf-a917-6462c50316ce" containerName="mariadb-database-create" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.750632 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f6gcs" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.759877 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.760163 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.772407 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ftpqv" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.780812 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f6gcs"] Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.972875 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd8xc\" (UniqueName: \"kubernetes.io/projected/53396ba8-39b2-43dc-a0d3-acef5fb61cda-kube-api-access-xd8xc\") pod \"nova-cell0-conductor-db-sync-f6gcs\" (UID: \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\") " pod="openstack/nova-cell0-conductor-db-sync-f6gcs" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.973390 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f6gcs\" (UID: \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\") " pod="openstack/nova-cell0-conductor-db-sync-f6gcs" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.973564 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-scripts\") pod \"nova-cell0-conductor-db-sync-f6gcs\" (UID: \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\") " 
pod="openstack/nova-cell0-conductor-db-sync-f6gcs" Dec 11 10:13:47 crc kubenswrapper[4746]: I1211 10:13:47.981369 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-config-data\") pod \"nova-cell0-conductor-db-sync-f6gcs\" (UID: \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\") " pod="openstack/nova-cell0-conductor-db-sync-f6gcs" Dec 11 10:13:48 crc kubenswrapper[4746]: I1211 10:13:48.083859 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-scripts\") pod \"nova-cell0-conductor-db-sync-f6gcs\" (UID: \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\") " pod="openstack/nova-cell0-conductor-db-sync-f6gcs" Dec 11 10:13:48 crc kubenswrapper[4746]: I1211 10:13:48.083993 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-config-data\") pod \"nova-cell0-conductor-db-sync-f6gcs\" (UID: \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\") " pod="openstack/nova-cell0-conductor-db-sync-f6gcs" Dec 11 10:13:48 crc kubenswrapper[4746]: I1211 10:13:48.084019 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd8xc\" (UniqueName: \"kubernetes.io/projected/53396ba8-39b2-43dc-a0d3-acef5fb61cda-kube-api-access-xd8xc\") pod \"nova-cell0-conductor-db-sync-f6gcs\" (UID: \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\") " pod="openstack/nova-cell0-conductor-db-sync-f6gcs" Dec 11 10:13:48 crc kubenswrapper[4746]: I1211 10:13:48.084067 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f6gcs\" (UID: 
\"53396ba8-39b2-43dc-a0d3-acef5fb61cda\") " pod="openstack/nova-cell0-conductor-db-sync-f6gcs" Dec 11 10:13:48 crc kubenswrapper[4746]: I1211 10:13:48.091610 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-scripts\") pod \"nova-cell0-conductor-db-sync-f6gcs\" (UID: \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\") " pod="openstack/nova-cell0-conductor-db-sync-f6gcs" Dec 11 10:13:48 crc kubenswrapper[4746]: I1211 10:13:48.101698 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-f6gcs\" (UID: \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\") " pod="openstack/nova-cell0-conductor-db-sync-f6gcs" Dec 11 10:13:48 crc kubenswrapper[4746]: I1211 10:13:48.126700 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-config-data\") pod \"nova-cell0-conductor-db-sync-f6gcs\" (UID: \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\") " pod="openstack/nova-cell0-conductor-db-sync-f6gcs" Dec 11 10:13:48 crc kubenswrapper[4746]: I1211 10:13:48.137579 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd8xc\" (UniqueName: \"kubernetes.io/projected/53396ba8-39b2-43dc-a0d3-acef5fb61cda-kube-api-access-xd8xc\") pod \"nova-cell0-conductor-db-sync-f6gcs\" (UID: \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\") " pod="openstack/nova-cell0-conductor-db-sync-f6gcs" Dec 11 10:13:48 crc kubenswrapper[4746]: I1211 10:13:48.399133 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f6gcs" Dec 11 10:13:48 crc kubenswrapper[4746]: W1211 10:13:48.877496 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53396ba8_39b2_43dc_a0d3_acef5fb61cda.slice/crio-7210d811ec51829d9620588df82a775a8981f3aa87029ac5ad35aaed55a2e7d4 WatchSource:0}: Error finding container 7210d811ec51829d9620588df82a775a8981f3aa87029ac5ad35aaed55a2e7d4: Status 404 returned error can't find the container with id 7210d811ec51829d9620588df82a775a8981f3aa87029ac5ad35aaed55a2e7d4 Dec 11 10:13:48 crc kubenswrapper[4746]: I1211 10:13:48.883072 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f6gcs"] Dec 11 10:13:49 crc kubenswrapper[4746]: I1211 10:13:49.210294 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f6gcs" event={"ID":"53396ba8-39b2-43dc-a0d3-acef5fb61cda","Type":"ContainerStarted","Data":"7210d811ec51829d9620588df82a775a8981f3aa87029ac5ad35aaed55a2e7d4"} Dec 11 10:13:53 crc kubenswrapper[4746]: I1211 10:13:53.507686 4746 generic.go:334] "Generic (PLEG): container finished" podID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerID="8aabe732ef936f5735d39c50b76cab5561901e3e56a01bfae13995a193cd5765" exitCode=0 Dec 11 10:13:53 crc kubenswrapper[4746]: I1211 10:13:53.507764 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e24f590f-4ee4-47c4-b3bb-89279ed4acbf","Type":"ContainerDied","Data":"8aabe732ef936f5735d39c50b76cab5561901e3e56a01bfae13995a193cd5765"} Dec 11 10:13:59 crc kubenswrapper[4746]: I1211 10:13:59.877832 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Dec 11 10:13:59 crc kubenswrapper[4746]: I1211 10:13:59.879214 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.520293 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.625972 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-log-httpd\") pod \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.626540 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-run-httpd\") pod \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.626662 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfsm8\" (UniqueName: \"kubernetes.io/projected/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-kube-api-access-hfsm8\") pod \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.626697 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-combined-ca-bundle\") pod \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " 
Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.626836 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-config-data\") pod \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.626931 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-scripts\") pod \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.626956 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-sg-core-conf-yaml\") pod \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\" (UID: \"e24f590f-4ee4-47c4-b3bb-89279ed4acbf\") " Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.630329 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e24f590f-4ee4-47c4-b3bb-89279ed4acbf" (UID: "e24f590f-4ee4-47c4-b3bb-89279ed4acbf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.630918 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e24f590f-4ee4-47c4-b3bb-89279ed4acbf" (UID: "e24f590f-4ee4-47c4-b3bb-89279ed4acbf"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.634330 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-kube-api-access-hfsm8" (OuterVolumeSpecName: "kube-api-access-hfsm8") pod "e24f590f-4ee4-47c4-b3bb-89279ed4acbf" (UID: "e24f590f-4ee4-47c4-b3bb-89279ed4acbf"). InnerVolumeSpecName "kube-api-access-hfsm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.648245 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-scripts" (OuterVolumeSpecName: "scripts") pod "e24f590f-4ee4-47c4-b3bb-89279ed4acbf" (UID: "e24f590f-4ee4-47c4-b3bb-89279ed4acbf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.672135 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e24f590f-4ee4-47c4-b3bb-89279ed4acbf" (UID: "e24f590f-4ee4-47c4-b3bb-89279ed4acbf"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.729662 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfsm8\" (UniqueName: \"kubernetes.io/projected/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-kube-api-access-hfsm8\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.729706 4746 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.729722 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.729734 4746 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.729747 4746 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.771827 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e24f590f-4ee4-47c4-b3bb-89279ed4acbf" (UID: "e24f590f-4ee4-47c4-b3bb-89279ed4acbf"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.803226 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-config-data" (OuterVolumeSpecName: "config-data") pod "e24f590f-4ee4-47c4-b3bb-89279ed4acbf" (UID: "e24f590f-4ee4-47c4-b3bb-89279ed4acbf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.832011 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.832073 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e24f590f-4ee4-47c4-b3bb-89279ed4acbf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.851601 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e24f590f-4ee4-47c4-b3bb-89279ed4acbf","Type":"ContainerDied","Data":"d21496c8548eee5d15112e4c54914ba761555677cc1ecbb6e02ee3434c6dff93"} Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.851691 4746 scope.go:117] "RemoveContainer" containerID="6a76b08aec168b33077cb6f68b86233699624a48a6de28649c6bb3419ea04796" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.851715 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.886545 4746 scope.go:117] "RemoveContainer" containerID="f4e7c1fd7eed197b97968194365223d1885e44914abe0888f8265a6f03defffd" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.912620 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.928508 4746 scope.go:117] "RemoveContainer" containerID="cf14edda06ca2f03c4478d4cb9067a162525121fef33f8b88c966d8dce691fdb" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.929403 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.950338 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:14:00 crc kubenswrapper[4746]: E1211 10:14:00.951392 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerName="sg-core" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.951426 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerName="sg-core" Dec 11 10:14:00 crc kubenswrapper[4746]: E1211 10:14:00.951446 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerName="ceilometer-central-agent" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.951454 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerName="ceilometer-central-agent" Dec 11 10:14:00 crc kubenswrapper[4746]: E1211 10:14:00.951481 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerName="proxy-httpd" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.951494 4746 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerName="proxy-httpd" Dec 11 10:14:00 crc kubenswrapper[4746]: E1211 10:14:00.951506 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerName="ceilometer-notification-agent" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.951513 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerName="ceilometer-notification-agent" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.951857 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerName="sg-core" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.952147 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerName="ceilometer-notification-agent" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.952161 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerName="proxy-httpd" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.952197 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" containerName="ceilometer-central-agent" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.955173 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.959887 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.964890 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.980485 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:14:00 crc kubenswrapper[4746]: I1211 10:14:00.984767 4746 scope.go:117] "RemoveContainer" containerID="8aabe732ef936f5735d39c50b76cab5561901e3e56a01bfae13995a193cd5765" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.138536 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wjgl\" (UniqueName: \"kubernetes.io/projected/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-kube-api-access-7wjgl\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.138590 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-config-data\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.138648 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.138748 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.138790 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-scripts\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.138841 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-run-httpd\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.138927 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-log-httpd\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.240594 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-scripts\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.241034 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-run-httpd\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " 
pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.241186 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-log-httpd\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.241258 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wjgl\" (UniqueName: \"kubernetes.io/projected/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-kube-api-access-7wjgl\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.241282 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-config-data\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.241321 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.241396 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.242440 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-log-httpd\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.242598 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-run-httpd\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.246257 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.247901 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.248552 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-scripts\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.250579 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-config-data\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.267327 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7wjgl\" (UniqueName: \"kubernetes.io/projected/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-kube-api-access-7wjgl\") pod \"ceilometer-0\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.279402 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.645683 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e24f590f-4ee4-47c4-b3bb-89279ed4acbf" path="/var/lib/kubelet/pods/e24f590f-4ee4-47c4-b3bb-89279ed4acbf/volumes" Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.823653 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.868459 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f6gcs" event={"ID":"53396ba8-39b2-43dc-a0d3-acef5fb61cda","Type":"ContainerStarted","Data":"3c29ae587a217519d5f98e11075e05e86d821c2579c36eab6200220db20a34eb"} Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.872194 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9b02cf3-c616-4a16-b988-ff8d42abb3ba","Type":"ContainerStarted","Data":"a83dfaa61f16a521446a14969e619a9f12e8c1a34e007663242cfa3bc8f9de08"} Dec 11 10:14:01 crc kubenswrapper[4746]: I1211 10:14:01.906386 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-f6gcs" podStartSLOduration=3.282934075 podStartE2EDuration="14.906360118s" podCreationTimestamp="2025-12-11 10:13:47 +0000 UTC" firstStartedPulling="2025-12-11 10:13:48.88019257 +0000 UTC m=+1201.740055883" lastFinishedPulling="2025-12-11 10:14:00.503618613 +0000 UTC m=+1213.363481926" observedRunningTime="2025-12-11 10:14:01.898832225 +0000 UTC m=+1214.758695538" 
watchObservedRunningTime="2025-12-11 10:14:01.906360118 +0000 UTC m=+1214.766223441" Dec 11 10:14:04 crc kubenswrapper[4746]: I1211 10:14:04.495475 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 10:14:04 crc kubenswrapper[4746]: I1211 10:14:04.929526 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9b02cf3-c616-4a16-b988-ff8d42abb3ba","Type":"ContainerStarted","Data":"dbc24545dc79d23fe04b3a406c21f6c0a47273a9aaaec467edadfb22e8d87b2c"} Dec 11 10:14:06 crc kubenswrapper[4746]: I1211 10:14:06.962433 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9b02cf3-c616-4a16-b988-ff8d42abb3ba","Type":"ContainerStarted","Data":"548c68026460c67b8190a66c98c406eab44ff1c5750e333f55f1f4ca50b1d0cf"} Dec 11 10:14:07 crc kubenswrapper[4746]: I1211 10:14:07.978625 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9b02cf3-c616-4a16-b988-ff8d42abb3ba","Type":"ContainerStarted","Data":"7ed8334587fa9a8fea0302b9651fe0c3bf0b5042ee5555c4b303ffd42cd7e46b"} Dec 11 10:14:10 crc kubenswrapper[4746]: I1211 10:14:10.028203 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9b02cf3-c616-4a16-b988-ff8d42abb3ba","Type":"ContainerStarted","Data":"0d6590a60d1d4dee9351b988e20be31f1233011703e393311a3e4448498bf22e"} Dec 11 10:14:10 crc kubenswrapper[4746]: I1211 10:14:10.029420 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 10:14:10 crc kubenswrapper[4746]: I1211 10:14:10.066718 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.934932636 podStartE2EDuration="10.066692852s" podCreationTimestamp="2025-12-11 10:14:00 +0000 UTC" firstStartedPulling="2025-12-11 10:14:01.841785074 +0000 UTC m=+1214.701648387" 
lastFinishedPulling="2025-12-11 10:14:08.97354525 +0000 UTC m=+1221.833408603" observedRunningTime="2025-12-11 10:14:10.062102149 +0000 UTC m=+1222.921965462" watchObservedRunningTime="2025-12-11 10:14:10.066692852 +0000 UTC m=+1222.926556165" Dec 11 10:14:14 crc kubenswrapper[4746]: I1211 10:14:14.076581 4746 generic.go:334] "Generic (PLEG): container finished" podID="53396ba8-39b2-43dc-a0d3-acef5fb61cda" containerID="3c29ae587a217519d5f98e11075e05e86d821c2579c36eab6200220db20a34eb" exitCode=0 Dec 11 10:14:14 crc kubenswrapper[4746]: I1211 10:14:14.076667 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f6gcs" event={"ID":"53396ba8-39b2-43dc-a0d3-acef5fb61cda","Type":"ContainerDied","Data":"3c29ae587a217519d5f98e11075e05e86d821c2579c36eab6200220db20a34eb"} Dec 11 10:14:15 crc kubenswrapper[4746]: I1211 10:14:15.438694 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f6gcs" Dec 11 10:14:15 crc kubenswrapper[4746]: I1211 10:14:15.595661 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd8xc\" (UniqueName: \"kubernetes.io/projected/53396ba8-39b2-43dc-a0d3-acef5fb61cda-kube-api-access-xd8xc\") pod \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\" (UID: \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\") " Dec 11 10:14:15 crc kubenswrapper[4746]: I1211 10:14:15.595712 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-combined-ca-bundle\") pod \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\" (UID: \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\") " Dec 11 10:14:15 crc kubenswrapper[4746]: I1211 10:14:15.595840 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-config-data\") pod 
\"53396ba8-39b2-43dc-a0d3-acef5fb61cda\" (UID: \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\") " Dec 11 10:14:15 crc kubenswrapper[4746]: I1211 10:14:15.595977 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-scripts\") pod \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\" (UID: \"53396ba8-39b2-43dc-a0d3-acef5fb61cda\") " Dec 11 10:14:15 crc kubenswrapper[4746]: I1211 10:14:15.604485 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53396ba8-39b2-43dc-a0d3-acef5fb61cda-kube-api-access-xd8xc" (OuterVolumeSpecName: "kube-api-access-xd8xc") pod "53396ba8-39b2-43dc-a0d3-acef5fb61cda" (UID: "53396ba8-39b2-43dc-a0d3-acef5fb61cda"). InnerVolumeSpecName "kube-api-access-xd8xc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:14:15 crc kubenswrapper[4746]: I1211 10:14:15.609536 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-scripts" (OuterVolumeSpecName: "scripts") pod "53396ba8-39b2-43dc-a0d3-acef5fb61cda" (UID: "53396ba8-39b2-43dc-a0d3-acef5fb61cda"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:15 crc kubenswrapper[4746]: I1211 10:14:15.627958 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53396ba8-39b2-43dc-a0d3-acef5fb61cda" (UID: "53396ba8-39b2-43dc-a0d3-acef5fb61cda"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:15 crc kubenswrapper[4746]: I1211 10:14:15.636809 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-config-data" (OuterVolumeSpecName: "config-data") pod "53396ba8-39b2-43dc-a0d3-acef5fb61cda" (UID: "53396ba8-39b2-43dc-a0d3-acef5fb61cda"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:15 crc kubenswrapper[4746]: I1211 10:14:15.698490 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:15 crc kubenswrapper[4746]: I1211 10:14:15.698531 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd8xc\" (UniqueName: \"kubernetes.io/projected/53396ba8-39b2-43dc-a0d3-acef5fb61cda-kube-api-access-xd8xc\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:15 crc kubenswrapper[4746]: I1211 10:14:15.698547 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:15 crc kubenswrapper[4746]: I1211 10:14:15.698559 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53396ba8-39b2-43dc-a0d3-acef5fb61cda-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.096977 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-f6gcs" event={"ID":"53396ba8-39b2-43dc-a0d3-acef5fb61cda","Type":"ContainerDied","Data":"7210d811ec51829d9620588df82a775a8981f3aa87029ac5ad35aaed55a2e7d4"} Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.097285 4746 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="7210d811ec51829d9620588df82a775a8981f3aa87029ac5ad35aaed55a2e7d4" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.097131 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-f6gcs" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.218953 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 10:14:16 crc kubenswrapper[4746]: E1211 10:14:16.219502 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53396ba8-39b2-43dc-a0d3-acef5fb61cda" containerName="nova-cell0-conductor-db-sync" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.219530 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="53396ba8-39b2-43dc-a0d3-acef5fb61cda" containerName="nova-cell0-conductor-db-sync" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.219741 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="53396ba8-39b2-43dc-a0d3-acef5fb61cda" containerName="nova-cell0-conductor-db-sync" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.220586 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.223732 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-ftpqv" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.225506 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.228163 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.311064 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af9e7da-bc61-40ee-8c58-9f2201d12884-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5af9e7da-bc61-40ee-8c58-9f2201d12884\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.311374 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkqnn\" (UniqueName: \"kubernetes.io/projected/5af9e7da-bc61-40ee-8c58-9f2201d12884-kube-api-access-vkqnn\") pod \"nova-cell0-conductor-0\" (UID: \"5af9e7da-bc61-40ee-8c58-9f2201d12884\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.311468 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af9e7da-bc61-40ee-8c58-9f2201d12884-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5af9e7da-bc61-40ee-8c58-9f2201d12884\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.412667 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5af9e7da-bc61-40ee-8c58-9f2201d12884-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5af9e7da-bc61-40ee-8c58-9f2201d12884\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.413142 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkqnn\" (UniqueName: \"kubernetes.io/projected/5af9e7da-bc61-40ee-8c58-9f2201d12884-kube-api-access-vkqnn\") pod \"nova-cell0-conductor-0\" (UID: \"5af9e7da-bc61-40ee-8c58-9f2201d12884\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.413206 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af9e7da-bc61-40ee-8c58-9f2201d12884-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5af9e7da-bc61-40ee-8c58-9f2201d12884\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.418367 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5af9e7da-bc61-40ee-8c58-9f2201d12884-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5af9e7da-bc61-40ee-8c58-9f2201d12884\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.421588 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5af9e7da-bc61-40ee-8c58-9f2201d12884-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5af9e7da-bc61-40ee-8c58-9f2201d12884\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.430543 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkqnn\" (UniqueName: \"kubernetes.io/projected/5af9e7da-bc61-40ee-8c58-9f2201d12884-kube-api-access-vkqnn\") pod \"nova-cell0-conductor-0\" (UID: 
\"5af9e7da-bc61-40ee-8c58-9f2201d12884\") " pod="openstack/nova-cell0-conductor-0" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.550859 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 11 10:14:16 crc kubenswrapper[4746]: I1211 10:14:16.990913 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 11 10:14:16 crc kubenswrapper[4746]: W1211 10:14:16.995947 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5af9e7da_bc61_40ee_8c58_9f2201d12884.slice/crio-1305bda90bfa1bb975ad7649f3a4ccd1b5dda94ea38cf6b985b718e67ef1b973 WatchSource:0}: Error finding container 1305bda90bfa1bb975ad7649f3a4ccd1b5dda94ea38cf6b985b718e67ef1b973: Status 404 returned error can't find the container with id 1305bda90bfa1bb975ad7649f3a4ccd1b5dda94ea38cf6b985b718e67ef1b973 Dec 11 10:14:17 crc kubenswrapper[4746]: I1211 10:14:17.188229 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5af9e7da-bc61-40ee-8c58-9f2201d12884","Type":"ContainerStarted","Data":"1305bda90bfa1bb975ad7649f3a4ccd1b5dda94ea38cf6b985b718e67ef1b973"} Dec 11 10:14:18 crc kubenswrapper[4746]: I1211 10:14:18.205294 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5af9e7da-bc61-40ee-8c58-9f2201d12884","Type":"ContainerStarted","Data":"7e6f8426719eb41c4a9cc59a12632b19697a662bacf64f72938aafb3f6de6b0a"} Dec 11 10:14:18 crc kubenswrapper[4746]: I1211 10:14:18.206955 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 11 10:14:18 crc kubenswrapper[4746]: I1211 10:14:18.240520 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.240489138 podStartE2EDuration="2.240489138s" 
podCreationTimestamp="2025-12-11 10:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:18.224452757 +0000 UTC m=+1231.084316100" watchObservedRunningTime="2025-12-11 10:14:18.240489138 +0000 UTC m=+1231.100352481" Dec 11 10:14:26 crc kubenswrapper[4746]: I1211 10:14:26.591276 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.077013 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-zhfc5"] Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.078784 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zhfc5" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.082255 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.085968 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.095242 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zhfc5"] Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.212312 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-scripts\") pod \"nova-cell0-cell-mapping-zhfc5\" (UID: \"67204404-706d-4886-bb9d-ffa996f7bd90\") " pod="openstack/nova-cell0-cell-mapping-zhfc5" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.212463 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c92k\" (UniqueName: 
\"kubernetes.io/projected/67204404-706d-4886-bb9d-ffa996f7bd90-kube-api-access-5c92k\") pod \"nova-cell0-cell-mapping-zhfc5\" (UID: \"67204404-706d-4886-bb9d-ffa996f7bd90\") " pod="openstack/nova-cell0-cell-mapping-zhfc5" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.212522 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zhfc5\" (UID: \"67204404-706d-4886-bb9d-ffa996f7bd90\") " pod="openstack/nova-cell0-cell-mapping-zhfc5" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.212556 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-config-data\") pod \"nova-cell0-cell-mapping-zhfc5\" (UID: \"67204404-706d-4886-bb9d-ffa996f7bd90\") " pod="openstack/nova-cell0-cell-mapping-zhfc5" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.291378 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.293439 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.304241 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.308417 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.317306 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-scripts\") pod \"nova-cell0-cell-mapping-zhfc5\" (UID: \"67204404-706d-4886-bb9d-ffa996f7bd90\") " pod="openstack/nova-cell0-cell-mapping-zhfc5" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.317417 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c92k\" (UniqueName: \"kubernetes.io/projected/67204404-706d-4886-bb9d-ffa996f7bd90-kube-api-access-5c92k\") pod \"nova-cell0-cell-mapping-zhfc5\" (UID: \"67204404-706d-4886-bb9d-ffa996f7bd90\") " pod="openstack/nova-cell0-cell-mapping-zhfc5" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.317465 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zhfc5\" (UID: \"67204404-706d-4886-bb9d-ffa996f7bd90\") " pod="openstack/nova-cell0-cell-mapping-zhfc5" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.317502 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-config-data\") pod \"nova-cell0-cell-mapping-zhfc5\" (UID: \"67204404-706d-4886-bb9d-ffa996f7bd90\") " pod="openstack/nova-cell0-cell-mapping-zhfc5" Dec 11 10:14:27 crc kubenswrapper[4746]: 
I1211 10:14:27.336498 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-scripts\") pod \"nova-cell0-cell-mapping-zhfc5\" (UID: \"67204404-706d-4886-bb9d-ffa996f7bd90\") " pod="openstack/nova-cell0-cell-mapping-zhfc5" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.347749 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-zhfc5\" (UID: \"67204404-706d-4886-bb9d-ffa996f7bd90\") " pod="openstack/nova-cell0-cell-mapping-zhfc5" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.359936 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-config-data\") pod \"nova-cell0-cell-mapping-zhfc5\" (UID: \"67204404-706d-4886-bb9d-ffa996f7bd90\") " pod="openstack/nova-cell0-cell-mapping-zhfc5" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.368643 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c92k\" (UniqueName: \"kubernetes.io/projected/67204404-706d-4886-bb9d-ffa996f7bd90-kube-api-access-5c92k\") pod \"nova-cell0-cell-mapping-zhfc5\" (UID: \"67204404-706d-4886-bb9d-ffa996f7bd90\") " pod="openstack/nova-cell0-cell-mapping-zhfc5" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.381116 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.384587 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.396429 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.405010 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zhfc5" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.423614 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a8ffb5-96db-4f37-aabb-8079f481a245-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"08a8ffb5-96db-4f37-aabb-8079f481a245\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.424152 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fntgg\" (UniqueName: \"kubernetes.io/projected/08a8ffb5-96db-4f37-aabb-8079f481a245-kube-api-access-fntgg\") pod \"nova-cell1-novncproxy-0\" (UID: \"08a8ffb5-96db-4f37-aabb-8079f481a245\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.424222 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a8ffb5-96db-4f37-aabb-8079f481a245-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"08a8ffb5-96db-4f37-aabb-8079f481a245\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.424829 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.515624 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.524722 4746 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.526666 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-config-data\") pod \"nova-scheduler-0\" (UID: \"415eccfe-8760-4f2d-bcb1-ac8acc8429dc\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.526730 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdbgc\" (UniqueName: \"kubernetes.io/projected/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-kube-api-access-cdbgc\") pod \"nova-scheduler-0\" (UID: \"415eccfe-8760-4f2d-bcb1-ac8acc8429dc\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.526825 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a8ffb5-96db-4f37-aabb-8079f481a245-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"08a8ffb5-96db-4f37-aabb-8079f481a245\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.526843 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fntgg\" (UniqueName: \"kubernetes.io/projected/08a8ffb5-96db-4f37-aabb-8079f481a245-kube-api-access-fntgg\") pod \"nova-cell1-novncproxy-0\" (UID: \"08a8ffb5-96db-4f37-aabb-8079f481a245\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.526865 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"415eccfe-8760-4f2d-bcb1-ac8acc8429dc\") " 
pod="openstack/nova-scheduler-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.526894 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a8ffb5-96db-4f37-aabb-8079f481a245-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"08a8ffb5-96db-4f37-aabb-8079f481a245\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.554964 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a8ffb5-96db-4f37-aabb-8079f481a245-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"08a8ffb5-96db-4f37-aabb-8079f481a245\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.556980 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.558732 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a8ffb5-96db-4f37-aabb-8079f481a245-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"08a8ffb5-96db-4f37-aabb-8079f481a245\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.593882 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.642251 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c76243-c909-48db-849e-af02769440b5-logs\") pod \"nova-metadata-0\" (UID: \"70c76243-c909-48db-849e-af02769440b5\") " pod="openstack/nova-metadata-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.642331 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"415eccfe-8760-4f2d-bcb1-ac8acc8429dc\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.642413 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-config-data\") pod \"nova-scheduler-0\" (UID: \"415eccfe-8760-4f2d-bcb1-ac8acc8429dc\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.642458 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdbgc\" (UniqueName: \"kubernetes.io/projected/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-kube-api-access-cdbgc\") pod \"nova-scheduler-0\" (UID: \"415eccfe-8760-4f2d-bcb1-ac8acc8429dc\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.642484 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c76243-c909-48db-849e-af02769440b5-config-data\") pod \"nova-metadata-0\" (UID: \"70c76243-c909-48db-849e-af02769440b5\") " pod="openstack/nova-metadata-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.642534 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5znv2\" (UniqueName: \"kubernetes.io/projected/70c76243-c909-48db-849e-af02769440b5-kube-api-access-5znv2\") pod \"nova-metadata-0\" (UID: \"70c76243-c909-48db-849e-af02769440b5\") " pod="openstack/nova-metadata-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.642582 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c76243-c909-48db-849e-af02769440b5-combined-ca-bundle\") pod 
\"nova-metadata-0\" (UID: \"70c76243-c909-48db-849e-af02769440b5\") " pod="openstack/nova-metadata-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.663670 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fntgg\" (UniqueName: \"kubernetes.io/projected/08a8ffb5-96db-4f37-aabb-8079f481a245-kube-api-access-fntgg\") pod \"nova-cell1-novncproxy-0\" (UID: \"08a8ffb5-96db-4f37-aabb-8079f481a245\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.666152 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"415eccfe-8760-4f2d-bcb1-ac8acc8429dc\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.669956 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-config-data\") pod \"nova-scheduler-0\" (UID: \"415eccfe-8760-4f2d-bcb1-ac8acc8429dc\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.702905 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdbgc\" (UniqueName: \"kubernetes.io/projected/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-kube-api-access-cdbgc\") pod \"nova-scheduler-0\" (UID: \"415eccfe-8760-4f2d-bcb1-ac8acc8429dc\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.734781 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.737473 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.747636 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.757658 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.758830 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c76243-c909-48db-849e-af02769440b5-logs\") pod \"nova-metadata-0\" (UID: \"70c76243-c909-48db-849e-af02769440b5\") " pod="openstack/nova-metadata-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.749142 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c76243-c909-48db-849e-af02769440b5-logs\") pod \"nova-metadata-0\" (UID: \"70c76243-c909-48db-849e-af02769440b5\") " pod="openstack/nova-metadata-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.765094 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c76243-c909-48db-849e-af02769440b5-config-data\") pod \"nova-metadata-0\" (UID: \"70c76243-c909-48db-849e-af02769440b5\") " pod="openstack/nova-metadata-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.790289 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5znv2\" (UniqueName: \"kubernetes.io/projected/70c76243-c909-48db-849e-af02769440b5-kube-api-access-5znv2\") pod \"nova-metadata-0\" (UID: \"70c76243-c909-48db-849e-af02769440b5\") " pod="openstack/nova-metadata-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.790419 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c76243-c909-48db-849e-af02769440b5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"70c76243-c909-48db-849e-af02769440b5\") " 
pod="openstack/nova-metadata-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.819815 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c76243-c909-48db-849e-af02769440b5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"70c76243-c909-48db-849e-af02769440b5\") " pod="openstack/nova-metadata-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.831137 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.850612 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c76243-c909-48db-849e-af02769440b5-config-data\") pod \"nova-metadata-0\" (UID: \"70c76243-c909-48db-849e-af02769440b5\") " pod="openstack/nova-metadata-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.897343 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a6777c-3846-4084-983e-431b9f62d870-config-data\") pod \"nova-api-0\" (UID: \"62a6777c-3846-4084-983e-431b9f62d870\") " pod="openstack/nova-api-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.897433 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a6777c-3846-4084-983e-431b9f62d870-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62a6777c-3846-4084-983e-431b9f62d870\") " pod="openstack/nova-api-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.897466 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62a6777c-3846-4084-983e-431b9f62d870-logs\") pod \"nova-api-0\" (UID: \"62a6777c-3846-4084-983e-431b9f62d870\") " pod="openstack/nova-api-0" Dec 11 10:14:27 crc 
kubenswrapper[4746]: I1211 10:14:27.897599 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gbq9\" (UniqueName: \"kubernetes.io/projected/62a6777c-3846-4084-983e-431b9f62d870-kube-api-access-9gbq9\") pod \"nova-api-0\" (UID: \"62a6777c-3846-4084-983e-431b9f62d870\") " pod="openstack/nova-api-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.900811 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5znv2\" (UniqueName: \"kubernetes.io/projected/70c76243-c909-48db-849e-af02769440b5-kube-api-access-5znv2\") pod \"nova-metadata-0\" (UID: \"70c76243-c909-48db-849e-af02769440b5\") " pod="openstack/nova-metadata-0" Dec 11 10:14:27 crc kubenswrapper[4746]: I1211 10:14:27.950806 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.009076 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gbq9\" (UniqueName: \"kubernetes.io/projected/62a6777c-3846-4084-983e-431b9f62d870-kube-api-access-9gbq9\") pod \"nova-api-0\" (UID: \"62a6777c-3846-4084-983e-431b9f62d870\") " pod="openstack/nova-api-0" Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.009319 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a6777c-3846-4084-983e-431b9f62d870-config-data\") pod \"nova-api-0\" (UID: \"62a6777c-3846-4084-983e-431b9f62d870\") " pod="openstack/nova-api-0" Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.009421 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a6777c-3846-4084-983e-431b9f62d870-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62a6777c-3846-4084-983e-431b9f62d870\") " pod="openstack/nova-api-0" Dec 11 10:14:28 crc 
kubenswrapper[4746]: I1211 10:14:28.009461 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62a6777c-3846-4084-983e-431b9f62d870-logs\") pod \"nova-api-0\" (UID: \"62a6777c-3846-4084-983e-431b9f62d870\") " pod="openstack/nova-api-0"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.010413 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62a6777c-3846-4084-983e-431b9f62d870-logs\") pod \"nova-api-0\" (UID: \"62a6777c-3846-4084-983e-431b9f62d870\") " pod="openstack/nova-api-0"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.024245 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.038852 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-5ngml"]
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.054572 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a6777c-3846-4084-983e-431b9f62d870-config-data\") pod \"nova-api-0\" (UID: \"62a6777c-3846-4084-983e-431b9f62d870\") " pod="openstack/nova-api-0"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.068339 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.076614 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a6777c-3846-4084-983e-431b9f62d870-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"62a6777c-3846-4084-983e-431b9f62d870\") " pod="openstack/nova-api-0"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.108707 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gbq9\" (UniqueName: \"kubernetes.io/projected/62a6777c-3846-4084-983e-431b9f62d870-kube-api-access-9gbq9\") pod \"nova-api-0\" (UID: \"62a6777c-3846-4084-983e-431b9f62d870\") " pod="openstack/nova-api-0"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.170149 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-5ngml"]
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.231606 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5crkx\" (UniqueName: \"kubernetes.io/projected/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-kube-api-access-5crkx\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.232347 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-config\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.232386 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.232437 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-dns-svc\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.232477 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.232610 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.334063 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.334141 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.334164 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5crkx\" (UniqueName: \"kubernetes.io/projected/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-kube-api-access-5crkx\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.334291 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-config\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.334328 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.334369 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-dns-svc\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.336282 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-ovsdbserver-sb\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.336925 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-ovsdbserver-nb\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.337598 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-dns-swift-storage-0\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.337823 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-config\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.341762 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-dns-svc\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.362545 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5crkx\" (UniqueName: \"kubernetes.io/projected/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-kube-api-access-5crkx\") pod \"dnsmasq-dns-bccf8f775-5ngml\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.391450 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.411411 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.436774 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-zhfc5"]
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.768969 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 11 10:14:28 crc kubenswrapper[4746]: W1211 10:14:28.792183 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod415eccfe_8760_4f2d_bcb1_ac8acc8429dc.slice/crio-ba9667ec13f02d3054ab2f053fddc89b8ff5ac6ca899824a7fed315b4d76eaa0 WatchSource:0}: Error finding container ba9667ec13f02d3054ab2f053fddc89b8ff5ac6ca899824a7fed315b4d76eaa0: Status 404 returned error can't find the container with id ba9667ec13f02d3054ab2f053fddc89b8ff5ac6ca899824a7fed315b4d76eaa0
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.811386 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hx64l"]
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.815513 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hx64l"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.820626 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.821682 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.847303 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hx64l"]
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.908735 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 11 10:14:28 crc kubenswrapper[4746]: I1211 10:14:28.926248 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 11 10:14:28 crc kubenswrapper[4746]: W1211 10:14:28.927531 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08a8ffb5_96db_4f37_aabb_8079f481a245.slice/crio-709da73521e1210c95edbdf5f9adf4b37c29962cce61e3c73b9345f5c1d530af WatchSource:0}: Error finding container 709da73521e1210c95edbdf5f9adf4b37c29962cce61e3c73b9345f5c1d530af: Status 404 returned error can't find the container with id 709da73521e1210c95edbdf5f9adf4b37c29962cce61e3c73b9345f5c1d530af
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.018851 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-config-data\") pod \"nova-cell1-conductor-db-sync-hx64l\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " pod="openstack/nova-cell1-conductor-db-sync-hx64l"
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.019473 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-scripts\") pod \"nova-cell1-conductor-db-sync-hx64l\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " pod="openstack/nova-cell1-conductor-db-sync-hx64l"
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.019542 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hx64l\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " pod="openstack/nova-cell1-conductor-db-sync-hx64l"
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.019641 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5sxf\" (UniqueName: \"kubernetes.io/projected/94ccb677-c026-43d0-8a64-b5267ee040e3-kube-api-access-r5sxf\") pod \"nova-cell1-conductor-db-sync-hx64l\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " pod="openstack/nova-cell1-conductor-db-sync-hx64l"
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.130476 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.130736 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-scripts\") pod \"nova-cell1-conductor-db-sync-hx64l\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " pod="openstack/nova-cell1-conductor-db-sync-hx64l"
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.130915 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hx64l\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " pod="openstack/nova-cell1-conductor-db-sync-hx64l"
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.131188 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5sxf\" (UniqueName: \"kubernetes.io/projected/94ccb677-c026-43d0-8a64-b5267ee040e3-kube-api-access-r5sxf\") pod \"nova-cell1-conductor-db-sync-hx64l\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " pod="openstack/nova-cell1-conductor-db-sync-hx64l"
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.131426 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-config-data\") pod \"nova-cell1-conductor-db-sync-hx64l\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " pod="openstack/nova-cell1-conductor-db-sync-hx64l"
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.141228 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-hx64l\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " pod="openstack/nova-cell1-conductor-db-sync-hx64l"
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.141826 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-scripts\") pod \"nova-cell1-conductor-db-sync-hx64l\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " pod="openstack/nova-cell1-conductor-db-sync-hx64l"
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.144896 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-config-data\") pod \"nova-cell1-conductor-db-sync-hx64l\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " pod="openstack/nova-cell1-conductor-db-sync-hx64l"
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.152312 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5sxf\" (UniqueName: \"kubernetes.io/projected/94ccb677-c026-43d0-8a64-b5267ee040e3-kube-api-access-r5sxf\") pod \"nova-cell1-conductor-db-sync-hx64l\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " pod="openstack/nova-cell1-conductor-db-sync-hx64l"
Dec 11 10:14:29 crc kubenswrapper[4746]: W1211 10:14:29.200822 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc9c32d3_f278_46f4_8c02_ddfdf905c39d.slice/crio-80250e7e983ae238348104ed6ae6ec34e35db7f4526c6cf417d42d552b349588 WatchSource:0}: Error finding container 80250e7e983ae238348104ed6ae6ec34e35db7f4526c6cf417d42d552b349588: Status 404 returned error can't find the container with id 80250e7e983ae238348104ed6ae6ec34e35db7f4526c6cf417d42d552b349588
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.206011 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-5ngml"]
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.370659 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-5ngml" event={"ID":"bc9c32d3-f278-46f4-8c02-ddfdf905c39d","Type":"ContainerStarted","Data":"80250e7e983ae238348104ed6ae6ec34e35db7f4526c6cf417d42d552b349588"}
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.377254 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zhfc5" event={"ID":"67204404-706d-4886-bb9d-ffa996f7bd90","Type":"ContainerStarted","Data":"eb098b66a9da4d07dec346ad9235a1cfd53ae43ab88fc6d48ec21336c2a14f03"}
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.377340 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zhfc5" event={"ID":"67204404-706d-4886-bb9d-ffa996f7bd90","Type":"ContainerStarted","Data":"6cfd3e4a743906faaf0639eeaa289df1685bc44b71f7e0c1263811d573ce9776"}
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.379952 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"08a8ffb5-96db-4f37-aabb-8079f481a245","Type":"ContainerStarted","Data":"709da73521e1210c95edbdf5f9adf4b37c29962cce61e3c73b9345f5c1d530af"}
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.382453 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70c76243-c909-48db-849e-af02769440b5","Type":"ContainerStarted","Data":"d21cb8d1a72ff07bc970ac6c06e68c926829be153734d881e850d72285dcd012"}
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.388653 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"415eccfe-8760-4f2d-bcb1-ac8acc8429dc","Type":"ContainerStarted","Data":"ba9667ec13f02d3054ab2f053fddc89b8ff5ac6ca899824a7fed315b4d76eaa0"}
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.391160 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62a6777c-3846-4084-983e-431b9f62d870","Type":"ContainerStarted","Data":"77fbb49a32746e7c427c59bbad84ae9207019e54507c413e34debcfbd89bba52"}
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.398969 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-zhfc5" podStartSLOduration=2.398943455 podStartE2EDuration="2.398943455s" podCreationTimestamp="2025-12-11 10:14:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:29.39617207 +0000 UTC m=+1242.256035403" watchObservedRunningTime="2025-12-11 10:14:29.398943455 +0000 UTC m=+1242.258806758"
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.450770 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hx64l"
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.878262 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 10:14:29 crc kubenswrapper[4746]: I1211 10:14:29.878356 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 10:14:30 crc kubenswrapper[4746]: W1211 10:14:30.142722 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94ccb677_c026_43d0_8a64_b5267ee040e3.slice/crio-3e046f1b638153f1ea69356653a5767f59a68ecbec594c9ace76feb23511ebff WatchSource:0}: Error finding container 3e046f1b638153f1ea69356653a5767f59a68ecbec594c9ace76feb23511ebff: Status 404 returned error can't find the container with id 3e046f1b638153f1ea69356653a5767f59a68ecbec594c9ace76feb23511ebff
Dec 11 10:14:30 crc kubenswrapper[4746]: I1211 10:14:30.169152 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hx64l"]
Dec 11 10:14:30 crc kubenswrapper[4746]: I1211 10:14:30.419151 4746 generic.go:334] "Generic (PLEG): container finished" podID="bc9c32d3-f278-46f4-8c02-ddfdf905c39d" containerID="399cf3be6ea198b1d59d3e3e8e604bc646c056b9d290dfb6d68e5c85badba28c" exitCode=0
Dec 11 10:14:30 crc kubenswrapper[4746]: I1211 10:14:30.419281 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-5ngml" event={"ID":"bc9c32d3-f278-46f4-8c02-ddfdf905c39d","Type":"ContainerDied","Data":"399cf3be6ea198b1d59d3e3e8e604bc646c056b9d290dfb6d68e5c85badba28c"}
Dec 11 10:14:30 crc kubenswrapper[4746]: I1211 10:14:30.422316 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hx64l" event={"ID":"94ccb677-c026-43d0-8a64-b5267ee040e3","Type":"ContainerStarted","Data":"3e046f1b638153f1ea69356653a5767f59a68ecbec594c9ace76feb23511ebff"}
Dec 11 10:14:31 crc kubenswrapper[4746]: I1211 10:14:31.293797 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Dec 11 10:14:31 crc kubenswrapper[4746]: I1211 10:14:31.476955 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 11 10:14:31 crc kubenswrapper[4746]: I1211 10:14:31.492474 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-5ngml" event={"ID":"bc9c32d3-f278-46f4-8c02-ddfdf905c39d","Type":"ContainerStarted","Data":"d10ff6167d55fa9ca3754f1f5a9dea0f08757f6cc78184bc59958d24716a8778"}
Dec 11 10:14:31 crc kubenswrapper[4746]: I1211 10:14:31.492616 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:31 crc kubenswrapper[4746]: I1211 10:14:31.502190 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hx64l" event={"ID":"94ccb677-c026-43d0-8a64-b5267ee040e3","Type":"ContainerStarted","Data":"d5e6c802dab5139d15be3abb99132f22bf2e707a3b49fc27c041e4359c1e3d2b"}
Dec 11 10:14:31 crc kubenswrapper[4746]: I1211 10:14:31.556453 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bccf8f775-5ngml" podStartSLOduration=4.556416436 podStartE2EDuration="4.556416436s" podCreationTimestamp="2025-12-11 10:14:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:31.537473818 +0000 UTC m=+1244.397337151" watchObservedRunningTime="2025-12-11 10:14:31.556416436 +0000 UTC m=+1244.416279749"
Dec 11 10:14:31 crc kubenswrapper[4746]: I1211 10:14:31.584252 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 11 10:14:31 crc kubenswrapper[4746]: I1211 10:14:31.595925 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-hx64l" podStartSLOduration=3.595891095 podStartE2EDuration="3.595891095s" podCreationTimestamp="2025-12-11 10:14:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:31.560624789 +0000 UTC m=+1244.420488112" watchObservedRunningTime="2025-12-11 10:14:31.595891095 +0000 UTC m=+1244.455754408"
Dec 11 10:14:35 crc kubenswrapper[4746]: I1211 10:14:35.562678 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62a6777c-3846-4084-983e-431b9f62d870","Type":"ContainerStarted","Data":"d9a4b27a1f266eeef4420ca93d899eac45809115efd211dfd317b7371d0ca4b3"}
Dec 11 10:14:35 crc kubenswrapper[4746]: I1211 10:14:35.567892 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"08a8ffb5-96db-4f37-aabb-8079f481a245","Type":"ContainerStarted","Data":"60541b05f5ef2b0ba9dd4b0d51fdae9320457f2e56451b60534d4e553314c52c"}
Dec 11 10:14:35 crc kubenswrapper[4746]: I1211 10:14:35.568083 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="08a8ffb5-96db-4f37-aabb-8079f481a245" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://60541b05f5ef2b0ba9dd4b0d51fdae9320457f2e56451b60534d4e553314c52c" gracePeriod=30
Dec 11 10:14:35 crc kubenswrapper[4746]: I1211 10:14:35.579011 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"415eccfe-8760-4f2d-bcb1-ac8acc8429dc","Type":"ContainerStarted","Data":"8441f7a3f8832ee0909f014e0f29e3f069c3b922ff22ff2c8f7b9d93d549cdc9"}
Dec 11 10:14:35 crc kubenswrapper[4746]: I1211 10:14:35.605438 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.30188461 podStartE2EDuration="8.605407127s" podCreationTimestamp="2025-12-11 10:14:27 +0000 UTC" firstStartedPulling="2025-12-11 10:14:28.931890454 +0000 UTC m=+1241.791753767" lastFinishedPulling="2025-12-11 10:14:34.235412971 +0000 UTC m=+1247.095276284" observedRunningTime="2025-12-11 10:14:35.596390715 +0000 UTC m=+1248.456254028" watchObservedRunningTime="2025-12-11 10:14:35.605407127 +0000 UTC m=+1248.465270440"
Dec 11 10:14:35 crc kubenswrapper[4746]: I1211 10:14:35.624347 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.197505449 podStartE2EDuration="8.624298455s" podCreationTimestamp="2025-12-11 10:14:27 +0000 UTC" firstStartedPulling="2025-12-11 10:14:28.800183068 +0000 UTC m=+1241.660046381" lastFinishedPulling="2025-12-11 10:14:34.226976064 +0000 UTC m=+1247.086839387" observedRunningTime="2025-12-11 10:14:35.614777929 +0000 UTC m=+1248.474641252" watchObservedRunningTime="2025-12-11 10:14:35.624298455 +0000 UTC m=+1248.484161768"
Dec 11 10:14:36 crc kubenswrapper[4746]: I1211 10:14:36.597767 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70c76243-c909-48db-849e-af02769440b5","Type":"ContainerStarted","Data":"87abe561ca8bb003f1d56b9682b84ecd444485d1a34aa4de672fb5a9aa400dae"}
Dec 11 10:14:36 crc kubenswrapper[4746]: I1211 10:14:36.598157 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70c76243-c909-48db-849e-af02769440b5","Type":"ContainerStarted","Data":"5085ddb8bdb275016fe498556a08e8c06cc1c54191f885e1e49aacf6614bcc6e"}
Dec 11 10:14:36 crc kubenswrapper[4746]: I1211 10:14:36.598031 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="70c76243-c909-48db-849e-af02769440b5" containerName="nova-metadata-log" containerID="cri-o://87abe561ca8bb003f1d56b9682b84ecd444485d1a34aa4de672fb5a9aa400dae" gracePeriod=30
Dec 11 10:14:36 crc kubenswrapper[4746]: I1211 10:14:36.598270 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="70c76243-c909-48db-849e-af02769440b5" containerName="nova-metadata-metadata" containerID="cri-o://5085ddb8bdb275016fe498556a08e8c06cc1c54191f885e1e49aacf6614bcc6e" gracePeriod=30
Dec 11 10:14:36 crc kubenswrapper[4746]: I1211 10:14:36.620337 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.309120756 podStartE2EDuration="9.620310148s" podCreationTimestamp="2025-12-11 10:14:27 +0000 UTC" firstStartedPulling="2025-12-11 10:14:28.915549065 +0000 UTC m=+1241.775412378" lastFinishedPulling="2025-12-11 10:14:34.226738457 +0000 UTC m=+1247.086601770" observedRunningTime="2025-12-11 10:14:36.614608115 +0000 UTC m=+1249.474471428" watchObservedRunningTime="2025-12-11 10:14:36.620310148 +0000 UTC m=+1249.480173461"
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.015345 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.016138 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6f850dc5-ff1d-4e1e-a8ac-74fac0011d66" containerName="kube-state-metrics" containerID="cri-o://b2012f35a5122aa38fd256a3f8f8da68e7c2bf68069f2b62a93a5e85c00f5bdb" gracePeriod=30
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.624712 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62a6777c-3846-4084-983e-431b9f62d870","Type":"ContainerStarted","Data":"d405ca4162ed3c6d8350093932d5173c97b7b7363cb6aadbbf2a62f9a76c9052"}
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.658428 4746 generic.go:334] "Generic (PLEG): container finished" podID="6f850dc5-ff1d-4e1e-a8ac-74fac0011d66" containerID="b2012f35a5122aa38fd256a3f8f8da68e7c2bf68069f2b62a93a5e85c00f5bdb" exitCode=2
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.667799 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=5.53455405 podStartE2EDuration="10.667750483s" podCreationTimestamp="2025-12-11 10:14:27 +0000 UTC" firstStartedPulling="2025-12-11 10:14:29.133843117 +0000 UTC m=+1241.993706430" lastFinishedPulling="2025-12-11 10:14:34.26703955 +0000 UTC m=+1247.126902863" observedRunningTime="2025-12-11 10:14:37.654330132 +0000 UTC m=+1250.514193455" watchObservedRunningTime="2025-12-11 10:14:37.667750483 +0000 UTC m=+1250.527613786"
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.669214 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6f850dc5-ff1d-4e1e-a8ac-74fac0011d66","Type":"ContainerDied","Data":"b2012f35a5122aa38fd256a3f8f8da68e7c2bf68069f2b62a93a5e85c00f5bdb"}
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.669256 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6f850dc5-ff1d-4e1e-a8ac-74fac0011d66","Type":"ContainerDied","Data":"6d26c49465379e8ffb35ff4e54d49dbe3cef39a0901828d07494db9726e83306"}
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.669275 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d26c49465379e8ffb35ff4e54d49dbe3cef39a0901828d07494db9726e83306"
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.674023 4746 generic.go:334] "Generic (PLEG): container finished" podID="70c76243-c909-48db-849e-af02769440b5" containerID="87abe561ca8bb003f1d56b9682b84ecd444485d1a34aa4de672fb5a9aa400dae" exitCode=143
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.674114 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70c76243-c909-48db-849e-af02769440b5","Type":"ContainerDied","Data":"87abe561ca8bb003f1d56b9682b84ecd444485d1a34aa4de672fb5a9aa400dae"}
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.709254 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.715159 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j77q\" (UniqueName: \"kubernetes.io/projected/6f850dc5-ff1d-4e1e-a8ac-74fac0011d66-kube-api-access-2j77q\") pod \"6f850dc5-ff1d-4e1e-a8ac-74fac0011d66\" (UID: \"6f850dc5-ff1d-4e1e-a8ac-74fac0011d66\") "
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.734712 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f850dc5-ff1d-4e1e-a8ac-74fac0011d66-kube-api-access-2j77q" (OuterVolumeSpecName: "kube-api-access-2j77q") pod "6f850dc5-ff1d-4e1e-a8ac-74fac0011d66" (UID: "6f850dc5-ff1d-4e1e-a8ac-74fac0011d66"). InnerVolumeSpecName "kube-api-access-2j77q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.751769 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.818924 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j77q\" (UniqueName: \"kubernetes.io/projected/6f850dc5-ff1d-4e1e-a8ac-74fac0011d66-kube-api-access-2j77q\") on node \"crc\" DevicePath \"\""
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.952283 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Dec 11 10:14:37 crc kubenswrapper[4746]: I1211 10:14:37.952357 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:37.999787 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.026014 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.026137 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.392835 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.393392 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.413274 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bccf8f775-5ngml"
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.530228 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pxczw"]
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.530631 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-pxczw" podUID="f1d15816-c937-40de-971e-f13588eed4bc" containerName="dnsmasq-dns" containerID="cri-o://ab48afa002ac25205c0f2d14bec592360ae2ee2dba1c7424e504da0fb30e96fc" gracePeriod=10
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.717017 4746 generic.go:334] "Generic (PLEG): container finished" podID="f1d15816-c937-40de-971e-f13588eed4bc" containerID="ab48afa002ac25205c0f2d14bec592360ae2ee2dba1c7424e504da0fb30e96fc" exitCode=0
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.717095 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-pxczw" event={"ID":"f1d15816-c937-40de-971e-f13588eed4bc","Type":"ContainerDied","Data":"ab48afa002ac25205c0f2d14bec592360ae2ee2dba1c7424e504da0fb30e96fc"}
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.718176 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.759614 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.822163 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.864331 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.902257 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 11 10:14:38 crc kubenswrapper[4746]: E1211 10:14:38.903093 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f850dc5-ff1d-4e1e-a8ac-74fac0011d66" containerName="kube-state-metrics"
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.903122 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f850dc5-ff1d-4e1e-a8ac-74fac0011d66" containerName="kube-state-metrics"
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.903513 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f850dc5-ff1d-4e1e-a8ac-74fac0011d66" containerName="kube-state-metrics"
Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.904612 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.910294 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.910848 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 11 10:14:38 crc kubenswrapper[4746]: I1211 10:14:38.945993 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.062024 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a12b0580-9910-43bc-ac49-bbb03f54211b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a12b0580-9910-43bc-ac49-bbb03f54211b\") " pod="openstack/kube-state-metrics-0" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.062175 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv6b2\" (UniqueName: \"kubernetes.io/projected/a12b0580-9910-43bc-ac49-bbb03f54211b-kube-api-access-nv6b2\") pod \"kube-state-metrics-0\" (UID: \"a12b0580-9910-43bc-ac49-bbb03f54211b\") " pod="openstack/kube-state-metrics-0" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.062352 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a12b0580-9910-43bc-ac49-bbb03f54211b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a12b0580-9910-43bc-ac49-bbb03f54211b\") " pod="openstack/kube-state-metrics-0" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.062582 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/a12b0580-9910-43bc-ac49-bbb03f54211b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a12b0580-9910-43bc-ac49-bbb03f54211b\") " pod="openstack/kube-state-metrics-0" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.166594 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a12b0580-9910-43bc-ac49-bbb03f54211b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a12b0580-9910-43bc-ac49-bbb03f54211b\") " pod="openstack/kube-state-metrics-0" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.167176 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a12b0580-9910-43bc-ac49-bbb03f54211b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a12b0580-9910-43bc-ac49-bbb03f54211b\") " pod="openstack/kube-state-metrics-0" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.167234 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/a12b0580-9910-43bc-ac49-bbb03f54211b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a12b0580-9910-43bc-ac49-bbb03f54211b\") " pod="openstack/kube-state-metrics-0" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.168201 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv6b2\" (UniqueName: \"kubernetes.io/projected/a12b0580-9910-43bc-ac49-bbb03f54211b-kube-api-access-nv6b2\") pod \"kube-state-metrics-0\" (UID: \"a12b0580-9910-43bc-ac49-bbb03f54211b\") " pod="openstack/kube-state-metrics-0" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.380539 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/a12b0580-9910-43bc-ac49-bbb03f54211b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"a12b0580-9910-43bc-ac49-bbb03f54211b\") " pod="openstack/kube-state-metrics-0" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.386360 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/a12b0580-9910-43bc-ac49-bbb03f54211b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"a12b0580-9910-43bc-ac49-bbb03f54211b\") " pod="openstack/kube-state-metrics-0" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.392322 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv6b2\" (UniqueName: \"kubernetes.io/projected/a12b0580-9910-43bc-ac49-bbb03f54211b-kube-api-access-nv6b2\") pod \"kube-state-metrics-0\" (UID: \"a12b0580-9910-43bc-ac49-bbb03f54211b\") " pod="openstack/kube-state-metrics-0" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.440556 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a12b0580-9910-43bc-ac49-bbb03f54211b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"a12b0580-9910-43bc-ac49-bbb03f54211b\") " pod="openstack/kube-state-metrics-0" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.494697 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="62a6777c-3846-4084-983e-431b9f62d870" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.495196 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="62a6777c-3846-4084-983e-431b9f62d870" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.512849 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.541752 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.663026 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f850dc5-ff1d-4e1e-a8ac-74fac0011d66" path="/var/lib/kubelet/pods/6f850dc5-ff1d-4e1e-a8ac-74fac0011d66/volumes" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.690386 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-ovsdbserver-sb\") pod \"f1d15816-c937-40de-971e-f13588eed4bc\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.690547 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-config\") pod \"f1d15816-c937-40de-971e-f13588eed4bc\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.690730 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-dns-svc\") pod \"f1d15816-c937-40de-971e-f13588eed4bc\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.690867 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-dns-swift-storage-0\") pod 
\"f1d15816-c937-40de-971e-f13588eed4bc\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.690968 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-ovsdbserver-nb\") pod \"f1d15816-c937-40de-971e-f13588eed4bc\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.691032 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcgth\" (UniqueName: \"kubernetes.io/projected/f1d15816-c937-40de-971e-f13588eed4bc-kube-api-access-wcgth\") pod \"f1d15816-c937-40de-971e-f13588eed4bc\" (UID: \"f1d15816-c937-40de-971e-f13588eed4bc\") " Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.718974 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1d15816-c937-40de-971e-f13588eed4bc-kube-api-access-wcgth" (OuterVolumeSpecName: "kube-api-access-wcgth") pod "f1d15816-c937-40de-971e-f13588eed4bc" (UID: "f1d15816-c937-40de-971e-f13588eed4bc"). InnerVolumeSpecName "kube-api-access-wcgth". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.805078 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcgth\" (UniqueName: \"kubernetes.io/projected/f1d15816-c937-40de-971e-f13588eed4bc-kube-api-access-wcgth\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.846880 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f1d15816-c937-40de-971e-f13588eed4bc" (UID: "f1d15816-c937-40de-971e-f13588eed4bc"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.891181 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-config" (OuterVolumeSpecName: "config") pod "f1d15816-c937-40de-971e-f13588eed4bc" (UID: "f1d15816-c937-40de-971e-f13588eed4bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.892664 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-pxczw" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.907569 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f1d15816-c937-40de-971e-f13588eed4bc" (UID: "f1d15816-c937-40de-971e-f13588eed4bc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.912596 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.912634 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.912647 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:39 crc kubenswrapper[4746]: I1211 10:14:39.939286 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f1d15816-c937-40de-971e-f13588eed4bc" (UID: "f1d15816-c937-40de-971e-f13588eed4bc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:14:40 crc kubenswrapper[4746]: I1211 10:14:40.000535 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f1d15816-c937-40de-971e-f13588eed4bc" (UID: "f1d15816-c937-40de-971e-f13588eed4bc"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:14:40 crc kubenswrapper[4746]: I1211 10:14:40.001927 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-pxczw" event={"ID":"f1d15816-c937-40de-971e-f13588eed4bc","Type":"ContainerDied","Data":"fb8c673ab98b92e65842fc70bcade4cdc11f821d82fb603198378199c1ac28fc"} Dec 11 10:14:40 crc kubenswrapper[4746]: I1211 10:14:40.002019 4746 scope.go:117] "RemoveContainer" containerID="ab48afa002ac25205c0f2d14bec592360ae2ee2dba1c7424e504da0fb30e96fc" Dec 11 10:14:40 crc kubenswrapper[4746]: I1211 10:14:40.035260 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:40 crc kubenswrapper[4746]: I1211 10:14:40.035331 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f1d15816-c937-40de-971e-f13588eed4bc-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:40 crc kubenswrapper[4746]: I1211 10:14:40.085236 4746 scope.go:117] "RemoveContainer" containerID="8bc647c94a8355c0650d9b227730a450b75a93f80f67d8401dacc24cd469c2f9" Dec 11 10:14:40 crc kubenswrapper[4746]: I1211 10:14:40.395815 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pxczw"] Dec 11 10:14:40 crc kubenswrapper[4746]: I1211 10:14:40.428006 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-pxczw"] Dec 11 10:14:40 crc kubenswrapper[4746]: I1211 10:14:40.445885 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 11 10:14:40 crc kubenswrapper[4746]: I1211 10:14:40.858279 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:14:40 crc kubenswrapper[4746]: I1211 10:14:40.860944 4746 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openstack/ceilometer-0" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerName="ceilometer-central-agent" containerID="cri-o://dbc24545dc79d23fe04b3a406c21f6c0a47273a9aaaec467edadfb22e8d87b2c" gracePeriod=30 Dec 11 10:14:40 crc kubenswrapper[4746]: I1211 10:14:40.861088 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerName="sg-core" containerID="cri-o://7ed8334587fa9a8fea0302b9651fe0c3bf0b5042ee5555c4b303ffd42cd7e46b" gracePeriod=30 Dec 11 10:14:40 crc kubenswrapper[4746]: I1211 10:14:40.861137 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerName="ceilometer-notification-agent" containerID="cri-o://548c68026460c67b8190a66c98c406eab44ff1c5750e333f55f1f4ca50b1d0cf" gracePeriod=30 Dec 11 10:14:40 crc kubenswrapper[4746]: I1211 10:14:40.861091 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerName="proxy-httpd" containerID="cri-o://0d6590a60d1d4dee9351b988e20be31f1233011703e393311a3e4448498bf22e" gracePeriod=30 Dec 11 10:14:40 crc kubenswrapper[4746]: I1211 10:14:40.904165 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a12b0580-9910-43bc-ac49-bbb03f54211b","Type":"ContainerStarted","Data":"31713a2ce387af93ab5926fbb1b3d4ccbf763d2eccd8aa2d80cfc4ad25f7e3c2"} Dec 11 10:14:41 crc kubenswrapper[4746]: I1211 10:14:41.650300 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1d15816-c937-40de-971e-f13588eed4bc" path="/var/lib/kubelet/pods/f1d15816-c937-40de-971e-f13588eed4bc/volumes" Dec 11 10:14:41 crc kubenswrapper[4746]: I1211 10:14:41.919101 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-cell-mapping-zhfc5" event={"ID":"67204404-706d-4886-bb9d-ffa996f7bd90","Type":"ContainerDied","Data":"eb098b66a9da4d07dec346ad9235a1cfd53ae43ab88fc6d48ec21336c2a14f03"} Dec 11 10:14:41 crc kubenswrapper[4746]: I1211 10:14:41.919017 4746 generic.go:334] "Generic (PLEG): container finished" podID="67204404-706d-4886-bb9d-ffa996f7bd90" containerID="eb098b66a9da4d07dec346ad9235a1cfd53ae43ab88fc6d48ec21336c2a14f03" exitCode=0 Dec 11 10:14:41 crc kubenswrapper[4746]: I1211 10:14:41.923670 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerID="0d6590a60d1d4dee9351b988e20be31f1233011703e393311a3e4448498bf22e" exitCode=0 Dec 11 10:14:41 crc kubenswrapper[4746]: I1211 10:14:41.923713 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerID="7ed8334587fa9a8fea0302b9651fe0c3bf0b5042ee5555c4b303ffd42cd7e46b" exitCode=2 Dec 11 10:14:41 crc kubenswrapper[4746]: I1211 10:14:41.923733 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerID="dbc24545dc79d23fe04b3a406c21f6c0a47273a9aaaec467edadfb22e8d87b2c" exitCode=0 Dec 11 10:14:41 crc kubenswrapper[4746]: I1211 10:14:41.923767 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9b02cf3-c616-4a16-b988-ff8d42abb3ba","Type":"ContainerDied","Data":"0d6590a60d1d4dee9351b988e20be31f1233011703e393311a3e4448498bf22e"} Dec 11 10:14:41 crc kubenswrapper[4746]: I1211 10:14:41.923796 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9b02cf3-c616-4a16-b988-ff8d42abb3ba","Type":"ContainerDied","Data":"7ed8334587fa9a8fea0302b9651fe0c3bf0b5042ee5555c4b303ffd42cd7e46b"} Dec 11 10:14:41 crc kubenswrapper[4746]: I1211 10:14:41.923818 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a9b02cf3-c616-4a16-b988-ff8d42abb3ba","Type":"ContainerDied","Data":"dbc24545dc79d23fe04b3a406c21f6c0a47273a9aaaec467edadfb22e8d87b2c"} Dec 11 10:14:42 crc kubenswrapper[4746]: I1211 10:14:42.995504 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a12b0580-9910-43bc-ac49-bbb03f54211b","Type":"ContainerStarted","Data":"9bec8c2a51410b0ef62c50b56cae9c7685bd4802432ace8230ba19fb4bdf9d86"} Dec 11 10:14:42 crc kubenswrapper[4746]: I1211 10:14:42.995942 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.033451 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerID="548c68026460c67b8190a66c98c406eab44ff1c5750e333f55f1f4ca50b1d0cf" exitCode=0 Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.033768 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9b02cf3-c616-4a16-b988-ff8d42abb3ba","Type":"ContainerDied","Data":"548c68026460c67b8190a66c98c406eab44ff1c5750e333f55f1f4ca50b1d0cf"} Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.037238 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.487764386 podStartE2EDuration="5.03720776s" podCreationTimestamp="2025-12-11 10:14:38 +0000 UTC" firstStartedPulling="2025-12-11 10:14:40.431847893 +0000 UTC m=+1253.291711206" lastFinishedPulling="2025-12-11 10:14:41.981291267 +0000 UTC m=+1254.841154580" observedRunningTime="2025-12-11 10:14:43.01377048 +0000 UTC m=+1255.873633803" watchObservedRunningTime="2025-12-11 10:14:43.03720776 +0000 UTC m=+1255.897071073" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.568465 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.576220 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zhfc5" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.722322 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-config-data\") pod \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.722381 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-config-data\") pod \"67204404-706d-4886-bb9d-ffa996f7bd90\" (UID: \"67204404-706d-4886-bb9d-ffa996f7bd90\") " Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.722405 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-scripts\") pod \"67204404-706d-4886-bb9d-ffa996f7bd90\" (UID: \"67204404-706d-4886-bb9d-ffa996f7bd90\") " Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.722483 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-scripts\") pod \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.722547 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-combined-ca-bundle\") pod \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " Dec 11 10:14:43 crc 
kubenswrapper[4746]: I1211 10:14:43.722606 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-sg-core-conf-yaml\") pod \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.722651 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c92k\" (UniqueName: \"kubernetes.io/projected/67204404-706d-4886-bb9d-ffa996f7bd90-kube-api-access-5c92k\") pod \"67204404-706d-4886-bb9d-ffa996f7bd90\" (UID: \"67204404-706d-4886-bb9d-ffa996f7bd90\") " Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.722822 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-combined-ca-bundle\") pod \"67204404-706d-4886-bb9d-ffa996f7bd90\" (UID: \"67204404-706d-4886-bb9d-ffa996f7bd90\") " Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.722875 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-run-httpd\") pod \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.722918 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wjgl\" (UniqueName: \"kubernetes.io/projected/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-kube-api-access-7wjgl\") pod \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.722965 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-log-httpd\") pod \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\" (UID: \"a9b02cf3-c616-4a16-b988-ff8d42abb3ba\") " Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.724982 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a9b02cf3-c616-4a16-b988-ff8d42abb3ba" (UID: "a9b02cf3-c616-4a16-b988-ff8d42abb3ba"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.725509 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a9b02cf3-c616-4a16-b988-ff8d42abb3ba" (UID: "a9b02cf3-c616-4a16-b988-ff8d42abb3ba"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.730161 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-scripts" (OuterVolumeSpecName: "scripts") pod "a9b02cf3-c616-4a16-b988-ff8d42abb3ba" (UID: "a9b02cf3-c616-4a16-b988-ff8d42abb3ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.730296 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-scripts" (OuterVolumeSpecName: "scripts") pod "67204404-706d-4886-bb9d-ffa996f7bd90" (UID: "67204404-706d-4886-bb9d-ffa996f7bd90"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.732589 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-kube-api-access-7wjgl" (OuterVolumeSpecName: "kube-api-access-7wjgl") pod "a9b02cf3-c616-4a16-b988-ff8d42abb3ba" (UID: "a9b02cf3-c616-4a16-b988-ff8d42abb3ba"). InnerVolumeSpecName "kube-api-access-7wjgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.746305 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67204404-706d-4886-bb9d-ffa996f7bd90-kube-api-access-5c92k" (OuterVolumeSpecName: "kube-api-access-5c92k") pod "67204404-706d-4886-bb9d-ffa996f7bd90" (UID: "67204404-706d-4886-bb9d-ffa996f7bd90"). InnerVolumeSpecName "kube-api-access-5c92k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.763128 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a9b02cf3-c616-4a16-b988-ff8d42abb3ba" (UID: "a9b02cf3-c616-4a16-b988-ff8d42abb3ba"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.769364 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67204404-706d-4886-bb9d-ffa996f7bd90" (UID: "67204404-706d-4886-bb9d-ffa996f7bd90"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.773748 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-config-data" (OuterVolumeSpecName: "config-data") pod "67204404-706d-4886-bb9d-ffa996f7bd90" (UID: "67204404-706d-4886-bb9d-ffa996f7bd90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.822798 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9b02cf3-c616-4a16-b988-ff8d42abb3ba" (UID: "a9b02cf3-c616-4a16-b988-ff8d42abb3ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.825326 4746 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.826129 4746 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.826238 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wjgl\" (UniqueName: \"kubernetes.io/projected/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-kube-api-access-7wjgl\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.826353 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:43 crc 
kubenswrapper[4746]: I1211 10:14:43.826442 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.826529 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.826597 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.826669 4746 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.826746 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c92k\" (UniqueName: \"kubernetes.io/projected/67204404-706d-4886-bb9d-ffa996f7bd90-kube-api-access-5c92k\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.826812 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67204404-706d-4886-bb9d-ffa996f7bd90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.852373 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-config-data" (OuterVolumeSpecName: "config-data") pod "a9b02cf3-c616-4a16-b988-ff8d42abb3ba" (UID: "a9b02cf3-c616-4a16-b988-ff8d42abb3ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:43 crc kubenswrapper[4746]: I1211 10:14:43.928825 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b02cf3-c616-4a16-b988-ff8d42abb3ba-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.063134 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a9b02cf3-c616-4a16-b988-ff8d42abb3ba","Type":"ContainerDied","Data":"a83dfaa61f16a521446a14969e619a9f12e8c1a34e007663242cfa3bc8f9de08"} Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.063256 4746 scope.go:117] "RemoveContainer" containerID="0d6590a60d1d4dee9351b988e20be31f1233011703e393311a3e4448498bf22e" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.063605 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.072819 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-zhfc5" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.072795 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-zhfc5" event={"ID":"67204404-706d-4886-bb9d-ffa996f7bd90","Type":"ContainerDied","Data":"6cfd3e4a743906faaf0639eeaa289df1685bc44b71f7e0c1263811d573ce9776"} Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.072890 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cfd3e4a743906faaf0639eeaa289df1685bc44b71f7e0c1263811d573ce9776" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.125772 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.135744 4746 scope.go:117] "RemoveContainer" containerID="7ed8334587fa9a8fea0302b9651fe0c3bf0b5042ee5555c4b303ffd42cd7e46b" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.140294 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.161821 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:14:44 crc kubenswrapper[4746]: E1211 10:14:44.165010 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67204404-706d-4886-bb9d-ffa996f7bd90" containerName="nova-manage" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.165064 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="67204404-706d-4886-bb9d-ffa996f7bd90" containerName="nova-manage" Dec 11 10:14:44 crc kubenswrapper[4746]: E1211 10:14:44.165098 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerName="sg-core" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.165106 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerName="sg-core" Dec 11 
10:14:44 crc kubenswrapper[4746]: E1211 10:14:44.165121 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerName="ceilometer-notification-agent" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.165127 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerName="ceilometer-notification-agent" Dec 11 10:14:44 crc kubenswrapper[4746]: E1211 10:14:44.165138 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1d15816-c937-40de-971e-f13588eed4bc" containerName="dnsmasq-dns" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.165144 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1d15816-c937-40de-971e-f13588eed4bc" containerName="dnsmasq-dns" Dec 11 10:14:44 crc kubenswrapper[4746]: E1211 10:14:44.165163 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1d15816-c937-40de-971e-f13588eed4bc" containerName="init" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.165169 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1d15816-c937-40de-971e-f13588eed4bc" containerName="init" Dec 11 10:14:44 crc kubenswrapper[4746]: E1211 10:14:44.165176 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerName="ceilometer-central-agent" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.165182 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerName="ceilometer-central-agent" Dec 11 10:14:44 crc kubenswrapper[4746]: E1211 10:14:44.165204 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerName="proxy-httpd" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.165211 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerName="proxy-httpd" Dec 11 
10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.165506 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerName="ceilometer-central-agent" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.165528 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerName="proxy-httpd" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.165545 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerName="ceilometer-notification-agent" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.165577 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="67204404-706d-4886-bb9d-ffa996f7bd90" containerName="nova-manage" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.165592 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" containerName="sg-core" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.165634 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1d15816-c937-40de-971e-f13588eed4bc" containerName="dnsmasq-dns" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.170199 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.173791 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.174096 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.174871 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.177717 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.196586 4746 scope.go:117] "RemoveContainer" containerID="548c68026460c67b8190a66c98c406eab44ff1c5750e333f55f1f4ca50b1d0cf" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.239321 4746 scope.go:117] "RemoveContainer" containerID="dbc24545dc79d23fe04b3a406c21f6c0a47273a9aaaec467edadfb22e8d87b2c" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.329162 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.329682 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="62a6777c-3846-4084-983e-431b9f62d870" containerName="nova-api-log" containerID="cri-o://d9a4b27a1f266eeef4420ca93d899eac45809115efd211dfd317b7371d0ca4b3" gracePeriod=30 Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.329822 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="62a6777c-3846-4084-983e-431b9f62d870" containerName="nova-api-api" containerID="cri-o://d405ca4162ed3c6d8350093932d5173c97b7b7363cb6aadbbf2a62f9a76c9052" gracePeriod=30 Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.341072 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e726807-c33c-4d85-9165-5e7646b0d813-run-httpd\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.341126 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.341173 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.341210 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gff5g\" (UniqueName: \"kubernetes.io/projected/5e726807-c33c-4d85-9165-5e7646b0d813-kube-api-access-gff5g\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.341239 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.341280 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-scripts\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.341322 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-config-data\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.341368 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e726807-c33c-4d85-9165-5e7646b0d813-log-httpd\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.348248 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.348565 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="415eccfe-8760-4f2d-bcb1-ac8acc8429dc" containerName="nova-scheduler-scheduler" containerID="cri-o://8441f7a3f8832ee0909f014e0f29e3f069c3b922ff22ff2c8f7b9d93d549cdc9" gracePeriod=30 Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.447759 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e726807-c33c-4d85-9165-5e7646b0d813-run-httpd\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.447826 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.447882 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.447915 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gff5g\" (UniqueName: \"kubernetes.io/projected/5e726807-c33c-4d85-9165-5e7646b0d813-kube-api-access-gff5g\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.447941 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.447973 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-scripts\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.448012 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-config-data\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc 
kubenswrapper[4746]: I1211 10:14:44.448080 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e726807-c33c-4d85-9165-5e7646b0d813-log-httpd\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.448703 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e726807-c33c-4d85-9165-5e7646b0d813-log-httpd\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.448994 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e726807-c33c-4d85-9165-5e7646b0d813-run-httpd\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.456448 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.458610 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-config-data\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.459632 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-scripts\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " 
pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.461758 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.465466 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.472813 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gff5g\" (UniqueName: \"kubernetes.io/projected/5e726807-c33c-4d85-9165-5e7646b0d813-kube-api-access-gff5g\") pod \"ceilometer-0\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " pod="openstack/ceilometer-0" Dec 11 10:14:44 crc kubenswrapper[4746]: I1211 10:14:44.519461 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:14:45 crc kubenswrapper[4746]: I1211 10:14:45.086028 4746 generic.go:334] "Generic (PLEG): container finished" podID="62a6777c-3846-4084-983e-431b9f62d870" containerID="d9a4b27a1f266eeef4420ca93d899eac45809115efd211dfd317b7371d0ca4b3" exitCode=143 Dec 11 10:14:45 crc kubenswrapper[4746]: I1211 10:14:45.086439 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62a6777c-3846-4084-983e-431b9f62d870","Type":"ContainerDied","Data":"d9a4b27a1f266eeef4420ca93d899eac45809115efd211dfd317b7371d0ca4b3"} Dec 11 10:14:45 crc kubenswrapper[4746]: I1211 10:14:45.331529 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:14:45 crc kubenswrapper[4746]: I1211 10:14:45.644843 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b02cf3-c616-4a16-b988-ff8d42abb3ba" path="/var/lib/kubelet/pods/a9b02cf3-c616-4a16-b988-ff8d42abb3ba/volumes" Dec 11 10:14:46 crc kubenswrapper[4746]: I1211 10:14:46.104382 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e726807-c33c-4d85-9165-5e7646b0d813","Type":"ContainerStarted","Data":"5d487a1e21a4a8d0540c62b6e5401049df85ab1744e35720f13639761f45ec6a"} Dec 11 10:14:47 crc kubenswrapper[4746]: I1211 10:14:47.117976 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e726807-c33c-4d85-9165-5e7646b0d813","Type":"ContainerStarted","Data":"d432af5db78935cd70aa39cd729eef869d509f0d91565a035720ff344b1f8881"} Dec 11 10:14:47 crc kubenswrapper[4746]: E1211 10:14:47.957728 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8441f7a3f8832ee0909f014e0f29e3f069c3b922ff22ff2c8f7b9d93d549cdc9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 
11 10:14:47 crc kubenswrapper[4746]: E1211 10:14:47.960359 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8441f7a3f8832ee0909f014e0f29e3f069c3b922ff22ff2c8f7b9d93d549cdc9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 10:14:47 crc kubenswrapper[4746]: E1211 10:14:47.966618 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8441f7a3f8832ee0909f014e0f29e3f069c3b922ff22ff2c8f7b9d93d549cdc9" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 10:14:47 crc kubenswrapper[4746]: E1211 10:14:47.967166 4746 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="415eccfe-8760-4f2d-bcb1-ac8acc8429dc" containerName="nova-scheduler-scheduler" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.102590 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.135508 4746 generic.go:334] "Generic (PLEG): container finished" podID="62a6777c-3846-4084-983e-431b9f62d870" containerID="d405ca4162ed3c6d8350093932d5173c97b7b7363cb6aadbbf2a62f9a76c9052" exitCode=0 Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.135596 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.135654 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62a6777c-3846-4084-983e-431b9f62d870","Type":"ContainerDied","Data":"d405ca4162ed3c6d8350093932d5173c97b7b7363cb6aadbbf2a62f9a76c9052"} Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.135762 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"62a6777c-3846-4084-983e-431b9f62d870","Type":"ContainerDied","Data":"77fbb49a32746e7c427c59bbad84ae9207019e54507c413e34debcfbd89bba52"} Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.135801 4746 scope.go:117] "RemoveContainer" containerID="d405ca4162ed3c6d8350093932d5173c97b7b7363cb6aadbbf2a62f9a76c9052" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.140784 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e726807-c33c-4d85-9165-5e7646b0d813","Type":"ContainerStarted","Data":"aa62ee80f757f09799759ee272c8eb7e44293fb3646eefd6c716d24921b4bb3e"} Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.173358 4746 scope.go:117] "RemoveContainer" containerID="d9a4b27a1f266eeef4420ca93d899eac45809115efd211dfd317b7371d0ca4b3" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.197878 4746 scope.go:117] "RemoveContainer" containerID="d405ca4162ed3c6d8350093932d5173c97b7b7363cb6aadbbf2a62f9a76c9052" Dec 11 10:14:48 crc kubenswrapper[4746]: E1211 10:14:48.199027 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d405ca4162ed3c6d8350093932d5173c97b7b7363cb6aadbbf2a62f9a76c9052\": container with ID starting with d405ca4162ed3c6d8350093932d5173c97b7b7363cb6aadbbf2a62f9a76c9052 not found: ID does not exist" containerID="d405ca4162ed3c6d8350093932d5173c97b7b7363cb6aadbbf2a62f9a76c9052" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 
10:14:48.199143 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d405ca4162ed3c6d8350093932d5173c97b7b7363cb6aadbbf2a62f9a76c9052"} err="failed to get container status \"d405ca4162ed3c6d8350093932d5173c97b7b7363cb6aadbbf2a62f9a76c9052\": rpc error: code = NotFound desc = could not find container \"d405ca4162ed3c6d8350093932d5173c97b7b7363cb6aadbbf2a62f9a76c9052\": container with ID starting with d405ca4162ed3c6d8350093932d5173c97b7b7363cb6aadbbf2a62f9a76c9052 not found: ID does not exist" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.199184 4746 scope.go:117] "RemoveContainer" containerID="d9a4b27a1f266eeef4420ca93d899eac45809115efd211dfd317b7371d0ca4b3" Dec 11 10:14:48 crc kubenswrapper[4746]: E1211 10:14:48.199707 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9a4b27a1f266eeef4420ca93d899eac45809115efd211dfd317b7371d0ca4b3\": container with ID starting with d9a4b27a1f266eeef4420ca93d899eac45809115efd211dfd317b7371d0ca4b3 not found: ID does not exist" containerID="d9a4b27a1f266eeef4420ca93d899eac45809115efd211dfd317b7371d0ca4b3" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.199762 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9a4b27a1f266eeef4420ca93d899eac45809115efd211dfd317b7371d0ca4b3"} err="failed to get container status \"d9a4b27a1f266eeef4420ca93d899eac45809115efd211dfd317b7371d0ca4b3\": rpc error: code = NotFound desc = could not find container \"d9a4b27a1f266eeef4420ca93d899eac45809115efd211dfd317b7371d0ca4b3\": container with ID starting with d9a4b27a1f266eeef4420ca93d899eac45809115efd211dfd317b7371d0ca4b3 not found: ID does not exist" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.206247 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/62a6777c-3846-4084-983e-431b9f62d870-combined-ca-bundle\") pod \"62a6777c-3846-4084-983e-431b9f62d870\" (UID: \"62a6777c-3846-4084-983e-431b9f62d870\") " Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.206311 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gbq9\" (UniqueName: \"kubernetes.io/projected/62a6777c-3846-4084-983e-431b9f62d870-kube-api-access-9gbq9\") pod \"62a6777c-3846-4084-983e-431b9f62d870\" (UID: \"62a6777c-3846-4084-983e-431b9f62d870\") " Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.206682 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a6777c-3846-4084-983e-431b9f62d870-config-data\") pod \"62a6777c-3846-4084-983e-431b9f62d870\" (UID: \"62a6777c-3846-4084-983e-431b9f62d870\") " Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.206766 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62a6777c-3846-4084-983e-431b9f62d870-logs\") pod \"62a6777c-3846-4084-983e-431b9f62d870\" (UID: \"62a6777c-3846-4084-983e-431b9f62d870\") " Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.207092 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62a6777c-3846-4084-983e-431b9f62d870-logs" (OuterVolumeSpecName: "logs") pod "62a6777c-3846-4084-983e-431b9f62d870" (UID: "62a6777c-3846-4084-983e-431b9f62d870"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.207433 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/62a6777c-3846-4084-983e-431b9f62d870-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.213080 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a6777c-3846-4084-983e-431b9f62d870-kube-api-access-9gbq9" (OuterVolumeSpecName: "kube-api-access-9gbq9") pod "62a6777c-3846-4084-983e-431b9f62d870" (UID: "62a6777c-3846-4084-983e-431b9f62d870"). InnerVolumeSpecName "kube-api-access-9gbq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.247501 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a6777c-3846-4084-983e-431b9f62d870-config-data" (OuterVolumeSpecName: "config-data") pod "62a6777c-3846-4084-983e-431b9f62d870" (UID: "62a6777c-3846-4084-983e-431b9f62d870"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.248241 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62a6777c-3846-4084-983e-431b9f62d870-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "62a6777c-3846-4084-983e-431b9f62d870" (UID: "62a6777c-3846-4084-983e-431b9f62d870"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.309691 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62a6777c-3846-4084-983e-431b9f62d870-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.309973 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gbq9\" (UniqueName: \"kubernetes.io/projected/62a6777c-3846-4084-983e-431b9f62d870-kube-api-access-9gbq9\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.310066 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/62a6777c-3846-4084-983e-431b9f62d870-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.593883 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.613194 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.642556 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 10:14:48 crc kubenswrapper[4746]: E1211 10:14:48.643280 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a6777c-3846-4084-983e-431b9f62d870" containerName="nova-api-api" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.643301 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a6777c-3846-4084-983e-431b9f62d870" containerName="nova-api-api" Dec 11 10:14:48 crc kubenswrapper[4746]: E1211 10:14:48.643356 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a6777c-3846-4084-983e-431b9f62d870" containerName="nova-api-log" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.643365 4746 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="62a6777c-3846-4084-983e-431b9f62d870" containerName="nova-api-log" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.643702 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a6777c-3846-4084-983e-431b9f62d870" containerName="nova-api-api" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.643774 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a6777c-3846-4084-983e-431b9f62d870" containerName="nova-api-log" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.646482 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.650712 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.673389 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.828901 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15a9087-40ab-474e-802c-8f5e838abd15-config-data\") pod \"nova-api-0\" (UID: \"a15a9087-40ab-474e-802c-8f5e838abd15\") " pod="openstack/nova-api-0" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.829444 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf9w5\" (UniqueName: \"kubernetes.io/projected/a15a9087-40ab-474e-802c-8f5e838abd15-kube-api-access-wf9w5\") pod \"nova-api-0\" (UID: \"a15a9087-40ab-474e-802c-8f5e838abd15\") " pod="openstack/nova-api-0" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.829598 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15a9087-40ab-474e-802c-8f5e838abd15-logs\") pod \"nova-api-0\" (UID: 
\"a15a9087-40ab-474e-802c-8f5e838abd15\") " pod="openstack/nova-api-0" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.829753 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15a9087-40ab-474e-802c-8f5e838abd15-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a15a9087-40ab-474e-802c-8f5e838abd15\") " pod="openstack/nova-api-0" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.898757 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.931471 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf9w5\" (UniqueName: \"kubernetes.io/projected/a15a9087-40ab-474e-802c-8f5e838abd15-kube-api-access-wf9w5\") pod \"nova-api-0\" (UID: \"a15a9087-40ab-474e-802c-8f5e838abd15\") " pod="openstack/nova-api-0" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.931540 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15a9087-40ab-474e-802c-8f5e838abd15-logs\") pod \"nova-api-0\" (UID: \"a15a9087-40ab-474e-802c-8f5e838abd15\") " pod="openstack/nova-api-0" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.931616 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15a9087-40ab-474e-802c-8f5e838abd15-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a15a9087-40ab-474e-802c-8f5e838abd15\") " pod="openstack/nova-api-0" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.931686 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15a9087-40ab-474e-802c-8f5e838abd15-config-data\") pod \"nova-api-0\" (UID: \"a15a9087-40ab-474e-802c-8f5e838abd15\") 
" pod="openstack/nova-api-0" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.932340 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15a9087-40ab-474e-802c-8f5e838abd15-logs\") pod \"nova-api-0\" (UID: \"a15a9087-40ab-474e-802c-8f5e838abd15\") " pod="openstack/nova-api-0" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.952796 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15a9087-40ab-474e-802c-8f5e838abd15-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a15a9087-40ab-474e-802c-8f5e838abd15\") " pod="openstack/nova-api-0" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.961912 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15a9087-40ab-474e-802c-8f5e838abd15-config-data\") pod \"nova-api-0\" (UID: \"a15a9087-40ab-474e-802c-8f5e838abd15\") " pod="openstack/nova-api-0" Dec 11 10:14:48 crc kubenswrapper[4746]: I1211 10:14:48.969307 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf9w5\" (UniqueName: \"kubernetes.io/projected/a15a9087-40ab-474e-802c-8f5e838abd15-kube-api-access-wf9w5\") pod \"nova-api-0\" (UID: \"a15a9087-40ab-474e-802c-8f5e838abd15\") " pod="openstack/nova-api-0" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.036421 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-combined-ca-bundle\") pod \"415eccfe-8760-4f2d-bcb1-ac8acc8429dc\" (UID: \"415eccfe-8760-4f2d-bcb1-ac8acc8429dc\") " Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.036722 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-config-data\") pod \"415eccfe-8760-4f2d-bcb1-ac8acc8429dc\" (UID: \"415eccfe-8760-4f2d-bcb1-ac8acc8429dc\") " Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.037127 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdbgc\" (UniqueName: \"kubernetes.io/projected/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-kube-api-access-cdbgc\") pod \"415eccfe-8760-4f2d-bcb1-ac8acc8429dc\" (UID: \"415eccfe-8760-4f2d-bcb1-ac8acc8429dc\") " Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.048245 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-kube-api-access-cdbgc" (OuterVolumeSpecName: "kube-api-access-cdbgc") pod "415eccfe-8760-4f2d-bcb1-ac8acc8429dc" (UID: "415eccfe-8760-4f2d-bcb1-ac8acc8429dc"). InnerVolumeSpecName "kube-api-access-cdbgc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.075091 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-config-data" (OuterVolumeSpecName: "config-data") pod "415eccfe-8760-4f2d-bcb1-ac8acc8429dc" (UID: "415eccfe-8760-4f2d-bcb1-ac8acc8429dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.078158 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "415eccfe-8760-4f2d-bcb1-ac8acc8429dc" (UID: "415eccfe-8760-4f2d-bcb1-ac8acc8429dc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.141190 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.141240 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdbgc\" (UniqueName: \"kubernetes.io/projected/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-kube-api-access-cdbgc\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.141256 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/415eccfe-8760-4f2d-bcb1-ac8acc8429dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.155774 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e726807-c33c-4d85-9165-5e7646b0d813","Type":"ContainerStarted","Data":"f612d6c58f9a2e937bb8175f4eb4b30f92f48ec22494ca3a2a721d0995240d37"} Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.159011 4746 generic.go:334] "Generic (PLEG): container finished" podID="94ccb677-c026-43d0-8a64-b5267ee040e3" containerID="d5e6c802dab5139d15be3abb99132f22bf2e707a3b49fc27c041e4359c1e3d2b" exitCode=0 Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.159082 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hx64l" event={"ID":"94ccb677-c026-43d0-8a64-b5267ee040e3","Type":"ContainerDied","Data":"d5e6c802dab5139d15be3abb99132f22bf2e707a3b49fc27c041e4359c1e3d2b"} Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.165856 4746 generic.go:334] "Generic (PLEG): container finished" podID="415eccfe-8760-4f2d-bcb1-ac8acc8429dc" containerID="8441f7a3f8832ee0909f014e0f29e3f069c3b922ff22ff2c8f7b9d93d549cdc9" exitCode=0 Dec 
11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.165926 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"415eccfe-8760-4f2d-bcb1-ac8acc8429dc","Type":"ContainerDied","Data":"8441f7a3f8832ee0909f014e0f29e3f069c3b922ff22ff2c8f7b9d93d549cdc9"} Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.165970 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"415eccfe-8760-4f2d-bcb1-ac8acc8429dc","Type":"ContainerDied","Data":"ba9667ec13f02d3054ab2f053fddc89b8ff5ac6ca899824a7fed315b4d76eaa0"} Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.165994 4746 scope.go:117] "RemoveContainer" containerID="8441f7a3f8832ee0909f014e0f29e3f069c3b922ff22ff2c8f7b9d93d549cdc9" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.166527 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.194694 4746 scope.go:117] "RemoveContainer" containerID="8441f7a3f8832ee0909f014e0f29e3f069c3b922ff22ff2c8f7b9d93d549cdc9" Dec 11 10:14:49 crc kubenswrapper[4746]: E1211 10:14:49.195658 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8441f7a3f8832ee0909f014e0f29e3f069c3b922ff22ff2c8f7b9d93d549cdc9\": container with ID starting with 8441f7a3f8832ee0909f014e0f29e3f069c3b922ff22ff2c8f7b9d93d549cdc9 not found: ID does not exist" containerID="8441f7a3f8832ee0909f014e0f29e3f069c3b922ff22ff2c8f7b9d93d549cdc9" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.195805 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8441f7a3f8832ee0909f014e0f29e3f069c3b922ff22ff2c8f7b9d93d549cdc9"} err="failed to get container status \"8441f7a3f8832ee0909f014e0f29e3f069c3b922ff22ff2c8f7b9d93d549cdc9\": rpc error: code = NotFound desc = could not find container 
\"8441f7a3f8832ee0909f014e0f29e3f069c3b922ff22ff2c8f7b9d93d549cdc9\": container with ID starting with 8441f7a3f8832ee0909f014e0f29e3f069c3b922ff22ff2c8f7b9d93d549cdc9 not found: ID does not exist" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.199497 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.221199 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.231197 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.252493 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:14:49 crc kubenswrapper[4746]: E1211 10:14:49.253041 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415eccfe-8760-4f2d-bcb1-ac8acc8429dc" containerName="nova-scheduler-scheduler" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.253081 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="415eccfe-8760-4f2d-bcb1-ac8acc8429dc" containerName="nova-scheduler-scheduler" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.253298 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="415eccfe-8760-4f2d-bcb1-ac8acc8429dc" containerName="nova-scheduler-scheduler" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.254244 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.267817 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.272864 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.348576 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tfq5\" (UniqueName: \"kubernetes.io/projected/a9b013e1-4fb8-40ff-a895-a21fff60b543-kube-api-access-6tfq5\") pod \"nova-scheduler-0\" (UID: \"a9b013e1-4fb8-40ff-a895-a21fff60b543\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.349001 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b013e1-4fb8-40ff-a895-a21fff60b543-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a9b013e1-4fb8-40ff-a895-a21fff60b543\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.349205 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b013e1-4fb8-40ff-a895-a21fff60b543-config-data\") pod \"nova-scheduler-0\" (UID: \"a9b013e1-4fb8-40ff-a895-a21fff60b543\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.453505 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b013e1-4fb8-40ff-a895-a21fff60b543-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a9b013e1-4fb8-40ff-a895-a21fff60b543\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.455400 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b013e1-4fb8-40ff-a895-a21fff60b543-config-data\") pod \"nova-scheduler-0\" (UID: \"a9b013e1-4fb8-40ff-a895-a21fff60b543\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.455722 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tfq5\" (UniqueName: \"kubernetes.io/projected/a9b013e1-4fb8-40ff-a895-a21fff60b543-kube-api-access-6tfq5\") pod \"nova-scheduler-0\" (UID: \"a9b013e1-4fb8-40ff-a895-a21fff60b543\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.468824 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b013e1-4fb8-40ff-a895-a21fff60b543-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"a9b013e1-4fb8-40ff-a895-a21fff60b543\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.470001 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b013e1-4fb8-40ff-a895-a21fff60b543-config-data\") pod \"nova-scheduler-0\" (UID: \"a9b013e1-4fb8-40ff-a895-a21fff60b543\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.476632 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tfq5\" (UniqueName: \"kubernetes.io/projected/a9b013e1-4fb8-40ff-a895-a21fff60b543-kube-api-access-6tfq5\") pod \"nova-scheduler-0\" (UID: \"a9b013e1-4fb8-40ff-a895-a21fff60b543\") " pod="openstack/nova-scheduler-0" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.556455 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.585487 4746 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.683311 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="415eccfe-8760-4f2d-bcb1-ac8acc8429dc" path="/var/lib/kubelet/pods/415eccfe-8760-4f2d-bcb1-ac8acc8429dc/volumes" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.684268 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62a6777c-3846-4084-983e-431b9f62d870" path="/var/lib/kubelet/pods/62a6777c-3846-4084-983e-431b9f62d870/volumes" Dec 11 10:14:49 crc kubenswrapper[4746]: I1211 10:14:49.790600 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:14:50 crc kubenswrapper[4746]: I1211 10:14:50.185151 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a15a9087-40ab-474e-802c-8f5e838abd15","Type":"ContainerStarted","Data":"6dce7b3e3827d91044d5c21f17a7b17876f7c53cd781fa4ad54eab29d39d5b50"} Dec 11 10:14:50 crc kubenswrapper[4746]: I1211 10:14:50.185776 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a15a9087-40ab-474e-802c-8f5e838abd15","Type":"ContainerStarted","Data":"1c4bb2e53a68ca924e544350bda232e455b43d080142c819439a271d5a28d36b"} Dec 11 10:14:50 crc kubenswrapper[4746]: I1211 10:14:50.235125 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:14:50 crc kubenswrapper[4746]: I1211 10:14:50.776566 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hx64l" Dec 11 10:14:50 crc kubenswrapper[4746]: I1211 10:14:50.912246 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5sxf\" (UniqueName: \"kubernetes.io/projected/94ccb677-c026-43d0-8a64-b5267ee040e3-kube-api-access-r5sxf\") pod \"94ccb677-c026-43d0-8a64-b5267ee040e3\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " Dec 11 10:14:50 crc kubenswrapper[4746]: I1211 10:14:50.912427 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-config-data\") pod \"94ccb677-c026-43d0-8a64-b5267ee040e3\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " Dec 11 10:14:50 crc kubenswrapper[4746]: I1211 10:14:50.912500 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-scripts\") pod \"94ccb677-c026-43d0-8a64-b5267ee040e3\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " Dec 11 10:14:50 crc kubenswrapper[4746]: I1211 10:14:50.912846 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-combined-ca-bundle\") pod \"94ccb677-c026-43d0-8a64-b5267ee040e3\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " Dec 11 10:14:50 crc kubenswrapper[4746]: I1211 10:14:50.925587 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-scripts" (OuterVolumeSpecName: "scripts") pod "94ccb677-c026-43d0-8a64-b5267ee040e3" (UID: "94ccb677-c026-43d0-8a64-b5267ee040e3"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:50 crc kubenswrapper[4746]: I1211 10:14:50.940341 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94ccb677-c026-43d0-8a64-b5267ee040e3-kube-api-access-r5sxf" (OuterVolumeSpecName: "kube-api-access-r5sxf") pod "94ccb677-c026-43d0-8a64-b5267ee040e3" (UID: "94ccb677-c026-43d0-8a64-b5267ee040e3"). InnerVolumeSpecName "kube-api-access-r5sxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.052093 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-config-data" (OuterVolumeSpecName: "config-data") pod "94ccb677-c026-43d0-8a64-b5267ee040e3" (UID: "94ccb677-c026-43d0-8a64-b5267ee040e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.053476 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-config-data\") pod \"94ccb677-c026-43d0-8a64-b5267ee040e3\" (UID: \"94ccb677-c026-43d0-8a64-b5267ee040e3\") " Dec 11 10:14:51 crc kubenswrapper[4746]: W1211 10:14:51.053617 4746 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/94ccb677-c026-43d0-8a64-b5267ee040e3/volumes/kubernetes.io~secret/config-data Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.053654 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-config-data" (OuterVolumeSpecName: "config-data") pod "94ccb677-c026-43d0-8a64-b5267ee040e3" (UID: "94ccb677-c026-43d0-8a64-b5267ee040e3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.054983 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5sxf\" (UniqueName: \"kubernetes.io/projected/94ccb677-c026-43d0-8a64-b5267ee040e3-kube-api-access-r5sxf\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.055019 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.055030 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.057034 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94ccb677-c026-43d0-8a64-b5267ee040e3" (UID: "94ccb677-c026-43d0-8a64-b5267ee040e3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.157623 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94ccb677-c026-43d0-8a64-b5267ee040e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.203373 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-hx64l" event={"ID":"94ccb677-c026-43d0-8a64-b5267ee040e3","Type":"ContainerDied","Data":"3e046f1b638153f1ea69356653a5767f59a68ecbec594c9ace76feb23511ebff"} Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.203453 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e046f1b638153f1ea69356653a5767f59a68ecbec594c9ace76feb23511ebff" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.203410 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-hx64l" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.205969 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a9b013e1-4fb8-40ff-a895-a21fff60b543","Type":"ContainerStarted","Data":"f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f"} Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.206146 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a9b013e1-4fb8-40ff-a895-a21fff60b543","Type":"ContainerStarted","Data":"bf9f26240b7970f1600d117d95d7232bc63dc5db33d5982c38fd32b4d4e442b9"} Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.210276 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a15a9087-40ab-474e-802c-8f5e838abd15","Type":"ContainerStarted","Data":"07f8f5de03ceea2461e85f3bdb936f90adbd0800888f38cf43545a9ed314f547"} Dec 11 10:14:51 crc 
kubenswrapper[4746]: I1211 10:14:51.240222 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.24020067 podStartE2EDuration="2.24020067s" podCreationTimestamp="2025-12-11 10:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:51.239417469 +0000 UTC m=+1264.099280792" watchObservedRunningTime="2025-12-11 10:14:51.24020067 +0000 UTC m=+1264.100063993" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.297950 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.29791912 podStartE2EDuration="3.29791912s" podCreationTimestamp="2025-12-11 10:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:51.278872598 +0000 UTC m=+1264.138735921" watchObservedRunningTime="2025-12-11 10:14:51.29791912 +0000 UTC m=+1264.157782433" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.313798 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 10:14:51 crc kubenswrapper[4746]: E1211 10:14:51.314744 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94ccb677-c026-43d0-8a64-b5267ee040e3" containerName="nova-cell1-conductor-db-sync" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.314781 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="94ccb677-c026-43d0-8a64-b5267ee040e3" containerName="nova-cell1-conductor-db-sync" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.315108 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="94ccb677-c026-43d0-8a64-b5267ee040e3" containerName="nova-cell1-conductor-db-sync" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.316489 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.324579 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.329214 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.466256 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbcqw\" (UniqueName: \"kubernetes.io/projected/3d3d3996-336f-4ca7-a8eb-16a243b55115-kube-api-access-pbcqw\") pod \"nova-cell1-conductor-0\" (UID: \"3d3d3996-336f-4ca7-a8eb-16a243b55115\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.466355 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3d3996-336f-4ca7-a8eb-16a243b55115-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3d3d3996-336f-4ca7-a8eb-16a243b55115\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.466613 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3d3996-336f-4ca7-a8eb-16a243b55115-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3d3d3996-336f-4ca7-a8eb-16a243b55115\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.568839 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3d3996-336f-4ca7-a8eb-16a243b55115-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3d3d3996-336f-4ca7-a8eb-16a243b55115\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:14:51 crc 
kubenswrapper[4746]: I1211 10:14:51.568960 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbcqw\" (UniqueName: \"kubernetes.io/projected/3d3d3996-336f-4ca7-a8eb-16a243b55115-kube-api-access-pbcqw\") pod \"nova-cell1-conductor-0\" (UID: \"3d3d3996-336f-4ca7-a8eb-16a243b55115\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.569013 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3d3996-336f-4ca7-a8eb-16a243b55115-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3d3d3996-336f-4ca7-a8eb-16a243b55115\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.578140 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d3d3996-336f-4ca7-a8eb-16a243b55115-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"3d3d3996-336f-4ca7-a8eb-16a243b55115\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.578240 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3d3d3996-336f-4ca7-a8eb-16a243b55115-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"3d3d3996-336f-4ca7-a8eb-16a243b55115\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.591140 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbcqw\" (UniqueName: \"kubernetes.io/projected/3d3d3996-336f-4ca7-a8eb-16a243b55115-kube-api-access-pbcqw\") pod \"nova-cell1-conductor-0\" (UID: \"3d3d3996-336f-4ca7-a8eb-16a243b55115\") " pod="openstack/nova-cell1-conductor-0" Dec 11 10:14:51 crc kubenswrapper[4746]: I1211 10:14:51.642581 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 11 10:14:52 crc kubenswrapper[4746]: I1211 10:14:52.351488 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 11 10:14:53 crc kubenswrapper[4746]: I1211 10:14:53.243161 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e726807-c33c-4d85-9165-5e7646b0d813","Type":"ContainerStarted","Data":"37b8d35e70250950d9bbe0b5165c5f060ca3923c3b6a2a4227417c85cd8188ea"} Dec 11 10:14:53 crc kubenswrapper[4746]: I1211 10:14:53.243648 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 10:14:53 crc kubenswrapper[4746]: I1211 10:14:53.249831 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3d3d3996-336f-4ca7-a8eb-16a243b55115","Type":"ContainerStarted","Data":"87568dbe6aac91cd6b7927799651272afcd529def8b93d244da021d5c35734a6"} Dec 11 10:14:53 crc kubenswrapper[4746]: I1211 10:14:53.249886 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"3d3d3996-336f-4ca7-a8eb-16a243b55115","Type":"ContainerStarted","Data":"9fcfb25f83c0fb470551f7191db9ef4e9c06976186b192f24da2db255d9c2778"} Dec 11 10:14:53 crc kubenswrapper[4746]: I1211 10:14:53.250367 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 11 10:14:53 crc kubenswrapper[4746]: I1211 10:14:53.292625 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.597510988 podStartE2EDuration="9.29260362s" podCreationTimestamp="2025-12-11 10:14:44 +0000 UTC" firstStartedPulling="2025-12-11 10:14:45.336228021 +0000 UTC m=+1258.196091334" lastFinishedPulling="2025-12-11 10:14:52.031320653 +0000 UTC m=+1264.891183966" observedRunningTime="2025-12-11 10:14:53.274316448 +0000 UTC m=+1266.134179791" 
watchObservedRunningTime="2025-12-11 10:14:53.29260362 +0000 UTC m=+1266.152466923" Dec 11 10:14:53 crc kubenswrapper[4746]: I1211 10:14:53.319612 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.319576314 podStartE2EDuration="2.319576314s" podCreationTimestamp="2025-12-11 10:14:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:14:53.306214464 +0000 UTC m=+1266.166077797" watchObservedRunningTime="2025-12-11 10:14:53.319576314 +0000 UTC m=+1266.179439627" Dec 11 10:14:54 crc kubenswrapper[4746]: I1211 10:14:54.587629 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 11 10:14:59 crc kubenswrapper[4746]: I1211 10:14:59.200271 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 10:14:59 crc kubenswrapper[4746]: I1211 10:14:59.201780 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 10:14:59 crc kubenswrapper[4746]: I1211 10:14:59.587681 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 11 10:14:59 crc kubenswrapper[4746]: I1211 10:14:59.647640 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 11 10:14:59 crc kubenswrapper[4746]: I1211 10:14:59.877234 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:14:59 crc kubenswrapper[4746]: I1211 10:14:59.877770 4746 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:14:59 crc kubenswrapper[4746]: I1211 10:14:59.877834 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 10:14:59 crc kubenswrapper[4746]: I1211 10:14:59.878903 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c83e849437aeb890459f914e7f689680afdaaf0b057ea749f6e91f887067183f"} pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:14:59 crc kubenswrapper[4746]: I1211 10:14:59.878975 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" containerID="cri-o://c83e849437aeb890459f914e7f689680afdaaf0b057ea749f6e91f887067183f" gracePeriod=600 Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.167456 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2"] Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.171134 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2" Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.176578 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.176911 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.182419 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2"] Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.288449 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a15a9087-40ab-474e-802c-8f5e838abd15" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.289033 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a15a9087-40ab-474e-802c-8f5e838abd15" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.193:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.336952 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-secret-volume\") pod \"collect-profiles-29424135-rkgn2\" (UID: \"bf1663d1-5ec6-49fc-bab8-f2b102daba0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2" Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.337151 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-2v6sr\" (UniqueName: \"kubernetes.io/projected/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-kube-api-access-2v6sr\") pod \"collect-profiles-29424135-rkgn2\" (UID: \"bf1663d1-5ec6-49fc-bab8-f2b102daba0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2" Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.337217 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-config-volume\") pod \"collect-profiles-29424135-rkgn2\" (UID: \"bf1663d1-5ec6-49fc-bab8-f2b102daba0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2" Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.346536 4746 generic.go:334] "Generic (PLEG): container finished" podID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerID="c83e849437aeb890459f914e7f689680afdaaf0b057ea749f6e91f887067183f" exitCode=0 Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.346623 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerDied","Data":"c83e849437aeb890459f914e7f689680afdaaf0b057ea749f6e91f887067183f"} Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.346692 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"c4bc2bbb26d764668868d6659aa470877d6623d9d959c05982277a00cdacbca4"} Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.346715 4746 scope.go:117] "RemoveContainer" containerID="f3599b9865470e5f66c552862b8f5ba28a4b29a63faedd683cf231e8c14b3f2f" Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.386683 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-scheduler-0" Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.442865 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v6sr\" (UniqueName: \"kubernetes.io/projected/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-kube-api-access-2v6sr\") pod \"collect-profiles-29424135-rkgn2\" (UID: \"bf1663d1-5ec6-49fc-bab8-f2b102daba0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2" Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.443172 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-config-volume\") pod \"collect-profiles-29424135-rkgn2\" (UID: \"bf1663d1-5ec6-49fc-bab8-f2b102daba0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2" Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.443294 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-secret-volume\") pod \"collect-profiles-29424135-rkgn2\" (UID: \"bf1663d1-5ec6-49fc-bab8-f2b102daba0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2" Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.448347 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-config-volume\") pod \"collect-profiles-29424135-rkgn2\" (UID: \"bf1663d1-5ec6-49fc-bab8-f2b102daba0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2" Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.460968 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-secret-volume\") pod 
\"collect-profiles-29424135-rkgn2\" (UID: \"bf1663d1-5ec6-49fc-bab8-f2b102daba0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2" Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.483962 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v6sr\" (UniqueName: \"kubernetes.io/projected/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-kube-api-access-2v6sr\") pod \"collect-profiles-29424135-rkgn2\" (UID: \"bf1663d1-5ec6-49fc-bab8-f2b102daba0a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2" Dec 11 10:15:00 crc kubenswrapper[4746]: I1211 10:15:00.518816 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2" Dec 11 10:15:01 crc kubenswrapper[4746]: I1211 10:15:01.120861 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2"] Dec 11 10:15:01 crc kubenswrapper[4746]: I1211 10:15:01.366093 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2" event={"ID":"bf1663d1-5ec6-49fc-bab8-f2b102daba0a","Type":"ContainerStarted","Data":"5da5bb5d86e53a92d147eb3cd89382a09e258ec358e109760b7f952658fa5031"} Dec 11 10:15:01 crc kubenswrapper[4746]: I1211 10:15:01.704214 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 11 10:15:02 crc kubenswrapper[4746]: I1211 10:15:02.379771 4746 generic.go:334] "Generic (PLEG): container finished" podID="bf1663d1-5ec6-49fc-bab8-f2b102daba0a" containerID="afb9639e9034d99cabe2b4c1e7d5d2d8d22c89154e2307cdbe15052f532bc3c3" exitCode=0 Dec 11 10:15:02 crc kubenswrapper[4746]: I1211 10:15:02.379993 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2" 
event={"ID":"bf1663d1-5ec6-49fc-bab8-f2b102daba0a","Type":"ContainerDied","Data":"afb9639e9034d99cabe2b4c1e7d5d2d8d22c89154e2307cdbe15052f532bc3c3"} Dec 11 10:15:03 crc kubenswrapper[4746]: I1211 10:15:03.929982 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2" Dec 11 10:15:04 crc kubenswrapper[4746]: I1211 10:15:04.037609 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-secret-volume\") pod \"bf1663d1-5ec6-49fc-bab8-f2b102daba0a\" (UID: \"bf1663d1-5ec6-49fc-bab8-f2b102daba0a\") " Dec 11 10:15:04 crc kubenswrapper[4746]: I1211 10:15:04.037683 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-config-volume\") pod \"bf1663d1-5ec6-49fc-bab8-f2b102daba0a\" (UID: \"bf1663d1-5ec6-49fc-bab8-f2b102daba0a\") " Dec 11 10:15:04 crc kubenswrapper[4746]: I1211 10:15:04.037729 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v6sr\" (UniqueName: \"kubernetes.io/projected/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-kube-api-access-2v6sr\") pod \"bf1663d1-5ec6-49fc-bab8-f2b102daba0a\" (UID: \"bf1663d1-5ec6-49fc-bab8-f2b102daba0a\") " Dec 11 10:15:04 crc kubenswrapper[4746]: I1211 10:15:04.040150 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-config-volume" (OuterVolumeSpecName: "config-volume") pod "bf1663d1-5ec6-49fc-bab8-f2b102daba0a" (UID: "bf1663d1-5ec6-49fc-bab8-f2b102daba0a"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:15:04 crc kubenswrapper[4746]: I1211 10:15:04.046292 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-kube-api-access-2v6sr" (OuterVolumeSpecName: "kube-api-access-2v6sr") pod "bf1663d1-5ec6-49fc-bab8-f2b102daba0a" (UID: "bf1663d1-5ec6-49fc-bab8-f2b102daba0a"). InnerVolumeSpecName "kube-api-access-2v6sr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:15:04 crc kubenswrapper[4746]: I1211 10:15:04.046397 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bf1663d1-5ec6-49fc-bab8-f2b102daba0a" (UID: "bf1663d1-5ec6-49fc-bab8-f2b102daba0a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:04 crc kubenswrapper[4746]: I1211 10:15:04.140953 4746 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:04 crc kubenswrapper[4746]: I1211 10:15:04.141017 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:04 crc kubenswrapper[4746]: I1211 10:15:04.141096 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v6sr\" (UniqueName: \"kubernetes.io/projected/bf1663d1-5ec6-49fc-bab8-f2b102daba0a-kube-api-access-2v6sr\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:04 crc kubenswrapper[4746]: I1211 10:15:04.406977 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2" 
event={"ID":"bf1663d1-5ec6-49fc-bab8-f2b102daba0a","Type":"ContainerDied","Data":"5da5bb5d86e53a92d147eb3cd89382a09e258ec358e109760b7f952658fa5031"} Dec 11 10:15:04 crc kubenswrapper[4746]: I1211 10:15:04.407032 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5da5bb5d86e53a92d147eb3cd89382a09e258ec358e109760b7f952658fa5031" Dec 11 10:15:04 crc kubenswrapper[4746]: I1211 10:15:04.407110 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2" Dec 11 10:15:05 crc kubenswrapper[4746]: W1211 10:15:05.631685 4746 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf1663d1_5ec6_49fc_bab8_f2b102daba0a.slice": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf1663d1_5ec6_49fc_bab8_f2b102daba0a.slice: no such file or directory Dec 11 10:15:05 crc kubenswrapper[4746]: E1211 10:15:05.886341 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08a8ffb5_96db_4f37_aabb_8079f481a245.slice/crio-60541b05f5ef2b0ba9dd4b0d51fdae9320457f2e56451b60534d4e553314c52c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod08a8ffb5_96db_4f37_aabb_8079f481a245.slice/crio-conmon-60541b05f5ef2b0ba9dd4b0d51fdae9320457f2e56451b60534d4e553314c52c.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.357143 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.408346 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a8ffb5-96db-4f37-aabb-8079f481a245-combined-ca-bundle\") pod \"08a8ffb5-96db-4f37-aabb-8079f481a245\" (UID: \"08a8ffb5-96db-4f37-aabb-8079f481a245\") " Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.408480 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a8ffb5-96db-4f37-aabb-8079f481a245-config-data\") pod \"08a8ffb5-96db-4f37-aabb-8079f481a245\" (UID: \"08a8ffb5-96db-4f37-aabb-8079f481a245\") " Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.408671 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fntgg\" (UniqueName: \"kubernetes.io/projected/08a8ffb5-96db-4f37-aabb-8079f481a245-kube-api-access-fntgg\") pod \"08a8ffb5-96db-4f37-aabb-8079f481a245\" (UID: \"08a8ffb5-96db-4f37-aabb-8079f481a245\") " Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.416596 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08a8ffb5-96db-4f37-aabb-8079f481a245-kube-api-access-fntgg" (OuterVolumeSpecName: "kube-api-access-fntgg") pod "08a8ffb5-96db-4f37-aabb-8079f481a245" (UID: "08a8ffb5-96db-4f37-aabb-8079f481a245"). InnerVolumeSpecName "kube-api-access-fntgg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.441279 4746 generic.go:334] "Generic (PLEG): container finished" podID="08a8ffb5-96db-4f37-aabb-8079f481a245" containerID="60541b05f5ef2b0ba9dd4b0d51fdae9320457f2e56451b60534d4e553314c52c" exitCode=137 Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.441361 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"08a8ffb5-96db-4f37-aabb-8079f481a245","Type":"ContainerDied","Data":"60541b05f5ef2b0ba9dd4b0d51fdae9320457f2e56451b60534d4e553314c52c"} Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.441466 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"08a8ffb5-96db-4f37-aabb-8079f481a245","Type":"ContainerDied","Data":"709da73521e1210c95edbdf5f9adf4b37c29962cce61e3c73b9345f5c1d530af"} Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.441503 4746 scope.go:117] "RemoveContainer" containerID="60541b05f5ef2b0ba9dd4b0d51fdae9320457f2e56451b60534d4e553314c52c" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.441407 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.447031 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a8ffb5-96db-4f37-aabb-8079f481a245-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "08a8ffb5-96db-4f37-aabb-8079f481a245" (UID: "08a8ffb5-96db-4f37-aabb-8079f481a245"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.449413 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08a8ffb5-96db-4f37-aabb-8079f481a245-config-data" (OuterVolumeSpecName: "config-data") pod "08a8ffb5-96db-4f37-aabb-8079f481a245" (UID: "08a8ffb5-96db-4f37-aabb-8079f481a245"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.521451 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/08a8ffb5-96db-4f37-aabb-8079f481a245-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.521503 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/08a8ffb5-96db-4f37-aabb-8079f481a245-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.521514 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fntgg\" (UniqueName: \"kubernetes.io/projected/08a8ffb5-96db-4f37-aabb-8079f481a245-kube-api-access-fntgg\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.526935 4746 scope.go:117] "RemoveContainer" containerID="60541b05f5ef2b0ba9dd4b0d51fdae9320457f2e56451b60534d4e553314c52c" Dec 11 10:15:06 crc kubenswrapper[4746]: E1211 10:15:06.527939 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60541b05f5ef2b0ba9dd4b0d51fdae9320457f2e56451b60534d4e553314c52c\": container with ID starting with 60541b05f5ef2b0ba9dd4b0d51fdae9320457f2e56451b60534d4e553314c52c not found: ID does not exist" containerID="60541b05f5ef2b0ba9dd4b0d51fdae9320457f2e56451b60534d4e553314c52c" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 
10:15:06.527984 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60541b05f5ef2b0ba9dd4b0d51fdae9320457f2e56451b60534d4e553314c52c"} err="failed to get container status \"60541b05f5ef2b0ba9dd4b0d51fdae9320457f2e56451b60534d4e553314c52c\": rpc error: code = NotFound desc = could not find container \"60541b05f5ef2b0ba9dd4b0d51fdae9320457f2e56451b60534d4e553314c52c\": container with ID starting with 60541b05f5ef2b0ba9dd4b0d51fdae9320457f2e56451b60534d4e553314c52c not found: ID does not exist" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.794029 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.809694 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.835729 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:15:06 crc kubenswrapper[4746]: E1211 10:15:06.836425 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf1663d1-5ec6-49fc-bab8-f2b102daba0a" containerName="collect-profiles" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.836444 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf1663d1-5ec6-49fc-bab8-f2b102daba0a" containerName="collect-profiles" Dec 11 10:15:06 crc kubenswrapper[4746]: E1211 10:15:06.836511 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08a8ffb5-96db-4f37-aabb-8079f481a245" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.836521 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="08a8ffb5-96db-4f37-aabb-8079f481a245" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.836814 4746 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bf1663d1-5ec6-49fc-bab8-f2b102daba0a" containerName="collect-profiles" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.836852 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="08a8ffb5-96db-4f37-aabb-8079f481a245" containerName="nova-cell1-novncproxy-novncproxy" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.837810 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.842201 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.842526 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.842774 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.891416 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.929351 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f63c9f-c2d2-45a1-99b7-ee148b220e4d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"12f63c9f-c2d2-45a1-99b7-ee148b220e4d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.929416 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/12f63c9f-c2d2-45a1-99b7-ee148b220e4d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"12f63c9f-c2d2-45a1-99b7-ee148b220e4d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:06 
crc kubenswrapper[4746]: I1211 10:15:06.929659 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmdrb\" (UniqueName: \"kubernetes.io/projected/12f63c9f-c2d2-45a1-99b7-ee148b220e4d-kube-api-access-tmdrb\") pod \"nova-cell1-novncproxy-0\" (UID: \"12f63c9f-c2d2-45a1-99b7-ee148b220e4d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.929974 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/12f63c9f-c2d2-45a1-99b7-ee148b220e4d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"12f63c9f-c2d2-45a1-99b7-ee148b220e4d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:06 crc kubenswrapper[4746]: I1211 10:15:06.930186 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12f63c9f-c2d2-45a1-99b7-ee148b220e4d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"12f63c9f-c2d2-45a1-99b7-ee148b220e4d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.032558 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12f63c9f-c2d2-45a1-99b7-ee148b220e4d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"12f63c9f-c2d2-45a1-99b7-ee148b220e4d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.032671 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f63c9f-c2d2-45a1-99b7-ee148b220e4d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"12f63c9f-c2d2-45a1-99b7-ee148b220e4d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:07 crc kubenswrapper[4746]: 
I1211 10:15:07.032695 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/12f63c9f-c2d2-45a1-99b7-ee148b220e4d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"12f63c9f-c2d2-45a1-99b7-ee148b220e4d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.032740 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmdrb\" (UniqueName: \"kubernetes.io/projected/12f63c9f-c2d2-45a1-99b7-ee148b220e4d-kube-api-access-tmdrb\") pod \"nova-cell1-novncproxy-0\" (UID: \"12f63c9f-c2d2-45a1-99b7-ee148b220e4d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.032796 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/12f63c9f-c2d2-45a1-99b7-ee148b220e4d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"12f63c9f-c2d2-45a1-99b7-ee148b220e4d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.038508 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/12f63c9f-c2d2-45a1-99b7-ee148b220e4d-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"12f63c9f-c2d2-45a1-99b7-ee148b220e4d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.039179 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/12f63c9f-c2d2-45a1-99b7-ee148b220e4d-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"12f63c9f-c2d2-45a1-99b7-ee148b220e4d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.039635 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12f63c9f-c2d2-45a1-99b7-ee148b220e4d-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"12f63c9f-c2d2-45a1-99b7-ee148b220e4d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.046130 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12f63c9f-c2d2-45a1-99b7-ee148b220e4d-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"12f63c9f-c2d2-45a1-99b7-ee148b220e4d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.051130 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmdrb\" (UniqueName: \"kubernetes.io/projected/12f63c9f-c2d2-45a1-99b7-ee148b220e4d-kube-api-access-tmdrb\") pod \"nova-cell1-novncproxy-0\" (UID: \"12f63c9f-c2d2-45a1-99b7-ee148b220e4d\") " pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.233024 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.459007 4746 generic.go:334] "Generic (PLEG): container finished" podID="70c76243-c909-48db-849e-af02769440b5" containerID="5085ddb8bdb275016fe498556a08e8c06cc1c54191f885e1e49aacf6614bcc6e" exitCode=137 Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.459109 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70c76243-c909-48db-849e-af02769440b5","Type":"ContainerDied","Data":"5085ddb8bdb275016fe498556a08e8c06cc1c54191f885e1e49aacf6614bcc6e"} Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.470728 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.543708 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c76243-c909-48db-849e-af02769440b5-config-data\") pod \"70c76243-c909-48db-849e-af02769440b5\" (UID: \"70c76243-c909-48db-849e-af02769440b5\") " Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.543937 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5znv2\" (UniqueName: \"kubernetes.io/projected/70c76243-c909-48db-849e-af02769440b5-kube-api-access-5znv2\") pod \"70c76243-c909-48db-849e-af02769440b5\" (UID: \"70c76243-c909-48db-849e-af02769440b5\") " Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.544011 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c76243-c909-48db-849e-af02769440b5-combined-ca-bundle\") pod \"70c76243-c909-48db-849e-af02769440b5\" (UID: \"70c76243-c909-48db-849e-af02769440b5\") " Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.544060 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c76243-c909-48db-849e-af02769440b5-logs\") pod \"70c76243-c909-48db-849e-af02769440b5\" (UID: \"70c76243-c909-48db-849e-af02769440b5\") " Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.544869 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70c76243-c909-48db-849e-af02769440b5-logs" (OuterVolumeSpecName: "logs") pod "70c76243-c909-48db-849e-af02769440b5" (UID: "70c76243-c909-48db-849e-af02769440b5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.552527 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c76243-c909-48db-849e-af02769440b5-kube-api-access-5znv2" (OuterVolumeSpecName: "kube-api-access-5znv2") pod "70c76243-c909-48db-849e-af02769440b5" (UID: "70c76243-c909-48db-849e-af02769440b5"). InnerVolumeSpecName "kube-api-access-5znv2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.577521 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c76243-c909-48db-849e-af02769440b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70c76243-c909-48db-849e-af02769440b5" (UID: "70c76243-c909-48db-849e-af02769440b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.581181 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70c76243-c909-48db-849e-af02769440b5-config-data" (OuterVolumeSpecName: "config-data") pod "70c76243-c909-48db-849e-af02769440b5" (UID: "70c76243-c909-48db-849e-af02769440b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.642570 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08a8ffb5-96db-4f37-aabb-8079f481a245" path="/var/lib/kubelet/pods/08a8ffb5-96db-4f37-aabb-8079f481a245/volumes" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.646102 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70c76243-c909-48db-849e-af02769440b5-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.646162 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5znv2\" (UniqueName: \"kubernetes.io/projected/70c76243-c909-48db-849e-af02769440b5-kube-api-access-5znv2\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.646174 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70c76243-c909-48db-849e-af02769440b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.646184 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70c76243-c909-48db-849e-af02769440b5-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:07 crc kubenswrapper[4746]: I1211 10:15:07.766897 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.491209 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"70c76243-c909-48db-849e-af02769440b5","Type":"ContainerDied","Data":"d21cb8d1a72ff07bc970ac6c06e68c926829be153734d881e850d72285dcd012"} Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.491362 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.493803 4746 scope.go:117] "RemoveContainer" containerID="5085ddb8bdb275016fe498556a08e8c06cc1c54191f885e1e49aacf6614bcc6e" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.493997 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"12f63c9f-c2d2-45a1-99b7-ee148b220e4d","Type":"ContainerStarted","Data":"46ae384c184852a1fd4fc7479e2dc855e2d7ca89e1b5fff4b6d0cc8c15f8545e"} Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.494079 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"12f63c9f-c2d2-45a1-99b7-ee148b220e4d","Type":"ContainerStarted","Data":"c6a69484de1f20b5795fe0b7bbad0a47cd18d148d052a5fe228b8b6713bfe14d"} Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.538628 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.538604922 podStartE2EDuration="2.538604922s" podCreationTimestamp="2025-12-11 10:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:15:08.52548773 +0000 UTC m=+1281.385351073" watchObservedRunningTime="2025-12-11 10:15:08.538604922 +0000 UTC m=+1281.398468255" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.552223 4746 scope.go:117] "RemoveContainer" containerID="87abe561ca8bb003f1d56b9682b84ecd444485d1a34aa4de672fb5a9aa400dae" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.555144 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.574513 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.585606 4746 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:15:08 crc kubenswrapper[4746]: E1211 10:15:08.586368 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c76243-c909-48db-849e-af02769440b5" containerName="nova-metadata-metadata" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.586394 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c76243-c909-48db-849e-af02769440b5" containerName="nova-metadata-metadata" Dec 11 10:15:08 crc kubenswrapper[4746]: E1211 10:15:08.586422 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c76243-c909-48db-849e-af02769440b5" containerName="nova-metadata-log" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.586430 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c76243-c909-48db-849e-af02769440b5" containerName="nova-metadata-log" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.586776 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c76243-c909-48db-849e-af02769440b5" containerName="nova-metadata-metadata" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.586817 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c76243-c909-48db-849e-af02769440b5" containerName="nova-metadata-log" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.589981 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.596091 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.599990 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.605017 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.669036 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53a1abd9-6106-4686-afdc-db0324ce2e36-logs\") pod \"nova-metadata-0\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.669177 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vptzw\" (UniqueName: \"kubernetes.io/projected/53a1abd9-6106-4686-afdc-db0324ce2e36-kube-api-access-vptzw\") pod \"nova-metadata-0\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.669218 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-config-data\") pod \"nova-metadata-0\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.669286 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"53a1abd9-6106-4686-afdc-db0324ce2e36\") " pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.669516 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.772294 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vptzw\" (UniqueName: \"kubernetes.io/projected/53a1abd9-6106-4686-afdc-db0324ce2e36-kube-api-access-vptzw\") pod \"nova-metadata-0\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.772396 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-config-data\") pod \"nova-metadata-0\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.772427 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.772531 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 
10:15:08.772648 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53a1abd9-6106-4686-afdc-db0324ce2e36-logs\") pod \"nova-metadata-0\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.773333 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53a1abd9-6106-4686-afdc-db0324ce2e36-logs\") pod \"nova-metadata-0\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.780714 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-config-data\") pod \"nova-metadata-0\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.781971 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.790545 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.795934 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vptzw\" (UniqueName: \"kubernetes.io/projected/53a1abd9-6106-4686-afdc-db0324ce2e36-kube-api-access-vptzw\") pod \"nova-metadata-0\" (UID: 
\"53a1abd9-6106-4686-afdc-db0324ce2e36\") " pod="openstack/nova-metadata-0" Dec 11 10:15:08 crc kubenswrapper[4746]: I1211 10:15:08.931132 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.210760 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.212020 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.216426 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.226153 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 10:15:09 crc kubenswrapper[4746]: W1211 10:15:09.492080 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53a1abd9_6106_4686_afdc_db0324ce2e36.slice/crio-84de99168f6da84f8692728eb2b385610eb2b2739f26d71c9122397c8338ac70 WatchSource:0}: Error finding container 84de99168f6da84f8692728eb2b385610eb2b2739f26d71c9122397c8338ac70: Status 404 returned error can't find the container with id 84de99168f6da84f8692728eb2b385610eb2b2739f26d71c9122397c8338ac70 Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.503255 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.513547 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53a1abd9-6106-4686-afdc-db0324ce2e36","Type":"ContainerStarted","Data":"84de99168f6da84f8692728eb2b385610eb2b2739f26d71c9122397c8338ac70"} Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.516644 4746 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.527700 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.702650 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c76243-c909-48db-849e-af02769440b5" path="/var/lib/kubelet/pods/70c76243-c909-48db-849e-af02769440b5/volumes" Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.872248 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-n7t48"] Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.874607 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.933942 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-n7t48"] Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.960939 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j69nq\" (UniqueName: \"kubernetes.io/projected/32bb2966-d412-43e4-978e-21dc59433b4c-kube-api-access-j69nq\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.960998 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-config\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.961074 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.961125 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.961145 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:09 crc kubenswrapper[4746]: I1211 10:15:09.961169 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:10 crc kubenswrapper[4746]: I1211 10:15:10.064012 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:10 crc kubenswrapper[4746]: I1211 10:15:10.064204 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:10 crc kubenswrapper[4746]: I1211 10:15:10.064250 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:10 crc kubenswrapper[4746]: I1211 10:15:10.064286 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:10 crc kubenswrapper[4746]: I1211 10:15:10.064468 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j69nq\" (UniqueName: \"kubernetes.io/projected/32bb2966-d412-43e4-978e-21dc59433b4c-kube-api-access-j69nq\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:10 crc kubenswrapper[4746]: I1211 10:15:10.064529 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-config\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:10 crc kubenswrapper[4746]: I1211 10:15:10.065973 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-config\") pod 
\"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:10 crc kubenswrapper[4746]: I1211 10:15:10.066861 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-ovsdbserver-sb\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:10 crc kubenswrapper[4746]: I1211 10:15:10.078111 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-ovsdbserver-nb\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:10 crc kubenswrapper[4746]: I1211 10:15:10.080323 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-dns-svc\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:10 crc kubenswrapper[4746]: I1211 10:15:10.080999 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-dns-swift-storage-0\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:10 crc kubenswrapper[4746]: I1211 10:15:10.122429 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j69nq\" (UniqueName: \"kubernetes.io/projected/32bb2966-d412-43e4-978e-21dc59433b4c-kube-api-access-j69nq\") pod \"dnsmasq-dns-cd5cbd7b9-n7t48\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " 
pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:10 crc kubenswrapper[4746]: I1211 10:15:10.247908 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:10 crc kubenswrapper[4746]: I1211 10:15:10.872581 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-n7t48"] Dec 11 10:15:10 crc kubenswrapper[4746]: W1211 10:15:10.873160 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32bb2966_d412_43e4_978e_21dc59433b4c.slice/crio-fe45cf995604a98fad94ab4b4e34dc5d0ad6230c2b49261231dbdca1c922957e WatchSource:0}: Error finding container fe45cf995604a98fad94ab4b4e34dc5d0ad6230c2b49261231dbdca1c922957e: Status 404 returned error can't find the container with id fe45cf995604a98fad94ab4b4e34dc5d0ad6230c2b49261231dbdca1c922957e Dec 11 10:15:11 crc kubenswrapper[4746]: I1211 10:15:11.544806 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" event={"ID":"32bb2966-d412-43e4-978e-21dc59433b4c","Type":"ContainerStarted","Data":"fe45cf995604a98fad94ab4b4e34dc5d0ad6230c2b49261231dbdca1c922957e"} Dec 11 10:15:11 crc kubenswrapper[4746]: I1211 10:15:11.547383 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53a1abd9-6106-4686-afdc-db0324ce2e36","Type":"ContainerStarted","Data":"71d6957dd340c70334666a4f7c7133e0607aa65dcb1a4be3e488bdbef7332be7"} Dec 11 10:15:12 crc kubenswrapper[4746]: I1211 10:15:12.233415 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:12 crc kubenswrapper[4746]: I1211 10:15:12.568797 4746 generic.go:334] "Generic (PLEG): container finished" podID="32bb2966-d412-43e4-978e-21dc59433b4c" containerID="25de4006a56762c77186d2deeaa77a2ec106c881595169bffcc54840fb639d0e" exitCode=0 Dec 11 10:15:12 crc 
kubenswrapper[4746]: I1211 10:15:12.568903 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" event={"ID":"32bb2966-d412-43e4-978e-21dc59433b4c","Type":"ContainerDied","Data":"25de4006a56762c77186d2deeaa77a2ec106c881595169bffcc54840fb639d0e"} Dec 11 10:15:12 crc kubenswrapper[4746]: I1211 10:15:12.585663 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53a1abd9-6106-4686-afdc-db0324ce2e36","Type":"ContainerStarted","Data":"9fffde0ce1cd75a619e5db7b2535731d6ae0fcdbde2d4addaec3ff6e5fa61aca"} Dec 11 10:15:12 crc kubenswrapper[4746]: I1211 10:15:12.662721 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.662692569 podStartE2EDuration="4.662692569s" podCreationTimestamp="2025-12-11 10:15:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:15:12.630353611 +0000 UTC m=+1285.490216944" watchObservedRunningTime="2025-12-11 10:15:12.662692569 +0000 UTC m=+1285.522555882" Dec 11 10:15:12 crc kubenswrapper[4746]: I1211 10:15:12.699099 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:15:12 crc kubenswrapper[4746]: I1211 10:15:12.699772 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a15a9087-40ab-474e-802c-8f5e838abd15" containerName="nova-api-log" containerID="cri-o://6dce7b3e3827d91044d5c21f17a7b17876f7c53cd781fa4ad54eab29d39d5b50" gracePeriod=30 Dec 11 10:15:12 crc kubenswrapper[4746]: I1211 10:15:12.700575 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a15a9087-40ab-474e-802c-8f5e838abd15" containerName="nova-api-api" containerID="cri-o://07f8f5de03ceea2461e85f3bdb936f90adbd0800888f38cf43545a9ed314f547" gracePeriod=30 Dec 11 10:15:13 crc 
kubenswrapper[4746]: I1211 10:15:13.931562 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 10:15:13 crc kubenswrapper[4746]: I1211 10:15:13.931646 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 10:15:14 crc kubenswrapper[4746]: I1211 10:15:14.527690 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 11 10:15:14 crc kubenswrapper[4746]: I1211 10:15:14.687354 4746 generic.go:334] "Generic (PLEG): container finished" podID="a15a9087-40ab-474e-802c-8f5e838abd15" containerID="6dce7b3e3827d91044d5c21f17a7b17876f7c53cd781fa4ad54eab29d39d5b50" exitCode=143 Dec 11 10:15:14 crc kubenswrapper[4746]: I1211 10:15:14.687494 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a15a9087-40ab-474e-802c-8f5e838abd15","Type":"ContainerDied","Data":"6dce7b3e3827d91044d5c21f17a7b17876f7c53cd781fa4ad54eab29d39d5b50"} Dec 11 10:15:14 crc kubenswrapper[4746]: I1211 10:15:14.695160 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" event={"ID":"32bb2966-d412-43e4-978e-21dc59433b4c","Type":"ContainerStarted","Data":"5c7912a5988f292fb586e51c36eb144d833e6e4d61e3ba721601e58f7f8341a4"} Dec 11 10:15:14 crc kubenswrapper[4746]: I1211 10:15:14.695240 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:14 crc kubenswrapper[4746]: I1211 10:15:14.716277 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" podStartSLOduration=5.71625809 podStartE2EDuration="5.71625809s" podCreationTimestamp="2025-12-11 10:15:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:15:14.713099714 +0000 UTC m=+1287.572963027" 
watchObservedRunningTime="2025-12-11 10:15:14.71625809 +0000 UTC m=+1287.576121403" Dec 11 10:15:15 crc kubenswrapper[4746]: I1211 10:15:15.199310 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:15:15 crc kubenswrapper[4746]: I1211 10:15:15.199623 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" containerName="ceilometer-central-agent" containerID="cri-o://d432af5db78935cd70aa39cd729eef869d509f0d91565a035720ff344b1f8881" gracePeriod=30 Dec 11 10:15:15 crc kubenswrapper[4746]: I1211 10:15:15.199709 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" containerName="proxy-httpd" containerID="cri-o://37b8d35e70250950d9bbe0b5165c5f060ca3923c3b6a2a4227417c85cd8188ea" gracePeriod=30 Dec 11 10:15:15 crc kubenswrapper[4746]: I1211 10:15:15.199793 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" containerName="ceilometer-notification-agent" containerID="cri-o://aa62ee80f757f09799759ee272c8eb7e44293fb3646eefd6c716d24921b4bb3e" gracePeriod=30 Dec 11 10:15:15 crc kubenswrapper[4746]: I1211 10:15:15.199744 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" containerName="sg-core" containerID="cri-o://f612d6c58f9a2e937bb8175f4eb4b30f92f48ec22494ca3a2a721d0995240d37" gracePeriod=30 Dec 11 10:15:15 crc kubenswrapper[4746]: I1211 10:15:15.709784 4746 generic.go:334] "Generic (PLEG): container finished" podID="5e726807-c33c-4d85-9165-5e7646b0d813" containerID="37b8d35e70250950d9bbe0b5165c5f060ca3923c3b6a2a4227417c85cd8188ea" exitCode=0 Dec 11 10:15:15 crc kubenswrapper[4746]: I1211 10:15:15.710212 4746 generic.go:334] "Generic (PLEG): container 
finished" podID="5e726807-c33c-4d85-9165-5e7646b0d813" containerID="f612d6c58f9a2e937bb8175f4eb4b30f92f48ec22494ca3a2a721d0995240d37" exitCode=2 Dec 11 10:15:15 crc kubenswrapper[4746]: I1211 10:15:15.709869 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e726807-c33c-4d85-9165-5e7646b0d813","Type":"ContainerDied","Data":"37b8d35e70250950d9bbe0b5165c5f060ca3923c3b6a2a4227417c85cd8188ea"} Dec 11 10:15:15 crc kubenswrapper[4746]: I1211 10:15:15.710291 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e726807-c33c-4d85-9165-5e7646b0d813","Type":"ContainerDied","Data":"f612d6c58f9a2e937bb8175f4eb4b30f92f48ec22494ca3a2a721d0995240d37"} Dec 11 10:15:16 crc kubenswrapper[4746]: E1211 10:15:16.217978 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e726807_c33c_4d85_9165_5e7646b0d813.slice/crio-conmon-aa62ee80f757f09799759ee272c8eb7e44293fb3646eefd6c716d24921b4bb3e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5e726807_c33c_4d85_9165_5e7646b0d813.slice/crio-aa62ee80f757f09799759ee272c8eb7e44293fb3646eefd6c716d24921b4bb3e.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.727584 4746 generic.go:334] "Generic (PLEG): container finished" podID="5e726807-c33c-4d85-9165-5e7646b0d813" containerID="aa62ee80f757f09799759ee272c8eb7e44293fb3646eefd6c716d24921b4bb3e" exitCode=0 Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.728485 4746 generic.go:334] "Generic (PLEG): container finished" podID="5e726807-c33c-4d85-9165-5e7646b0d813" containerID="d432af5db78935cd70aa39cd729eef869d509f0d91565a035720ff344b1f8881" exitCode=0 Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.728591 4746 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e726807-c33c-4d85-9165-5e7646b0d813","Type":"ContainerDied","Data":"aa62ee80f757f09799759ee272c8eb7e44293fb3646eefd6c716d24921b4bb3e"} Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.728630 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e726807-c33c-4d85-9165-5e7646b0d813","Type":"ContainerDied","Data":"d432af5db78935cd70aa39cd729eef869d509f0d91565a035720ff344b1f8881"} Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.728641 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5e726807-c33c-4d85-9165-5e7646b0d813","Type":"ContainerDied","Data":"5d487a1e21a4a8d0540c62b6e5401049df85ab1744e35720f13639761f45ec6a"} Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.728653 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d487a1e21a4a8d0540c62b6e5401049df85ab1744e35720f13639761f45ec6a" Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.736255 4746 generic.go:334] "Generic (PLEG): container finished" podID="a15a9087-40ab-474e-802c-8f5e838abd15" containerID="07f8f5de03ceea2461e85f3bdb936f90adbd0800888f38cf43545a9ed314f547" exitCode=0 Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.737506 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a15a9087-40ab-474e-802c-8f5e838abd15","Type":"ContainerDied","Data":"07f8f5de03ceea2461e85f3bdb936f90adbd0800888f38cf43545a9ed314f547"} Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.777736 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.806384 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-ceilometer-tls-certs\") pod \"5e726807-c33c-4d85-9165-5e7646b0d813\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.806475 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e726807-c33c-4d85-9165-5e7646b0d813-run-httpd\") pod \"5e726807-c33c-4d85-9165-5e7646b0d813\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.806560 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-combined-ca-bundle\") pod \"5e726807-c33c-4d85-9165-5e7646b0d813\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.806613 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-config-data\") pod \"5e726807-c33c-4d85-9165-5e7646b0d813\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.806673 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e726807-c33c-4d85-9165-5e7646b0d813-log-httpd\") pod \"5e726807-c33c-4d85-9165-5e7646b0d813\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.806962 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-sg-core-conf-yaml\") pod \"5e726807-c33c-4d85-9165-5e7646b0d813\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.807244 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gff5g\" (UniqueName: \"kubernetes.io/projected/5e726807-c33c-4d85-9165-5e7646b0d813-kube-api-access-gff5g\") pod \"5e726807-c33c-4d85-9165-5e7646b0d813\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.807305 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-scripts\") pod \"5e726807-c33c-4d85-9165-5e7646b0d813\" (UID: \"5e726807-c33c-4d85-9165-5e7646b0d813\") " Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.810980 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e726807-c33c-4d85-9165-5e7646b0d813-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5e726807-c33c-4d85-9165-5e7646b0d813" (UID: "5e726807-c33c-4d85-9165-5e7646b0d813"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.813595 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e726807-c33c-4d85-9165-5e7646b0d813-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5e726807-c33c-4d85-9165-5e7646b0d813" (UID: "5e726807-c33c-4d85-9165-5e7646b0d813"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.827420 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e726807-c33c-4d85-9165-5e7646b0d813-kube-api-access-gff5g" (OuterVolumeSpecName: "kube-api-access-gff5g") pod "5e726807-c33c-4d85-9165-5e7646b0d813" (UID: "5e726807-c33c-4d85-9165-5e7646b0d813"). InnerVolumeSpecName "kube-api-access-gff5g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.834108 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-scripts" (OuterVolumeSpecName: "scripts") pod "5e726807-c33c-4d85-9165-5e7646b0d813" (UID: "5e726807-c33c-4d85-9165-5e7646b0d813"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.894002 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5e726807-c33c-4d85-9165-5e7646b0d813" (UID: "5e726807-c33c-4d85-9165-5e7646b0d813"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.912331 4746 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.912374 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.912390 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gff5g\" (UniqueName: \"kubernetes.io/projected/5e726807-c33c-4d85-9165-5e7646b0d813-kube-api-access-gff5g\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.912402 4746 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e726807-c33c-4d85-9165-5e7646b0d813-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.912415 4746 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5e726807-c33c-4d85-9165-5e7646b0d813-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:16 crc kubenswrapper[4746]: I1211 10:15:16.962653 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.003875 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e726807-c33c-4d85-9165-5e7646b0d813" (UID: "5e726807-c33c-4d85-9165-5e7646b0d813"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.007443 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "5e726807-c33c-4d85-9165-5e7646b0d813" (UID: "5e726807-c33c-4d85-9165-5e7646b0d813"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.020779 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15a9087-40ab-474e-802c-8f5e838abd15-logs\") pod \"a15a9087-40ab-474e-802c-8f5e838abd15\" (UID: \"a15a9087-40ab-474e-802c-8f5e838abd15\") " Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.020876 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf9w5\" (UniqueName: \"kubernetes.io/projected/a15a9087-40ab-474e-802c-8f5e838abd15-kube-api-access-wf9w5\") pod \"a15a9087-40ab-474e-802c-8f5e838abd15\" (UID: \"a15a9087-40ab-474e-802c-8f5e838abd15\") " Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.020910 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15a9087-40ab-474e-802c-8f5e838abd15-combined-ca-bundle\") pod \"a15a9087-40ab-474e-802c-8f5e838abd15\" (UID: \"a15a9087-40ab-474e-802c-8f5e838abd15\") " Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.021087 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15a9087-40ab-474e-802c-8f5e838abd15-config-data\") pod \"a15a9087-40ab-474e-802c-8f5e838abd15\" (UID: \"a15a9087-40ab-474e-802c-8f5e838abd15\") " Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.026251 4746 
reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.026326 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.027525 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a15a9087-40ab-474e-802c-8f5e838abd15-logs" (OuterVolumeSpecName: "logs") pod "a15a9087-40ab-474e-802c-8f5e838abd15" (UID: "a15a9087-40ab-474e-802c-8f5e838abd15"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.030880 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-config-data" (OuterVolumeSpecName: "config-data") pod "5e726807-c33c-4d85-9165-5e7646b0d813" (UID: "5e726807-c33c-4d85-9165-5e7646b0d813"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.045668 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a15a9087-40ab-474e-802c-8f5e838abd15-kube-api-access-wf9w5" (OuterVolumeSpecName: "kube-api-access-wf9w5") pod "a15a9087-40ab-474e-802c-8f5e838abd15" (UID: "a15a9087-40ab-474e-802c-8f5e838abd15"). InnerVolumeSpecName "kube-api-access-wf9w5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.108198 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a15a9087-40ab-474e-802c-8f5e838abd15-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a15a9087-40ab-474e-802c-8f5e838abd15" (UID: "a15a9087-40ab-474e-802c-8f5e838abd15"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.128568 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf9w5\" (UniqueName: \"kubernetes.io/projected/a15a9087-40ab-474e-802c-8f5e838abd15-kube-api-access-wf9w5\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.128623 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15a9087-40ab-474e-802c-8f5e838abd15-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.128637 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5e726807-c33c-4d85-9165-5e7646b0d813-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.128648 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15a9087-40ab-474e-802c-8f5e838abd15-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.139333 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a15a9087-40ab-474e-802c-8f5e838abd15-config-data" (OuterVolumeSpecName: "config-data") pod "a15a9087-40ab-474e-802c-8f5e838abd15" (UID: "a15a9087-40ab-474e-802c-8f5e838abd15"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.230935 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15a9087-40ab-474e-802c-8f5e838abd15-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.233366 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.262843 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.751355 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a15a9087-40ab-474e-802c-8f5e838abd15","Type":"ContainerDied","Data":"1c4bb2e53a68ca924e544350bda232e455b43d080142c819439a271d5a28d36b"} Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.751936 4746 scope.go:117] "RemoveContainer" containerID="07f8f5de03ceea2461e85f3bdb936f90adbd0800888f38cf43545a9ed314f547" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.751501 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.751583 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.789013 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.789916 4746 scope.go:117] "RemoveContainer" containerID="6dce7b3e3827d91044d5c21f17a7b17876f7c53cd781fa4ad54eab29d39d5b50" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.804460 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.831160 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.878700 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.895996 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.920950 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:15:17 crc kubenswrapper[4746]: E1211 10:15:17.921696 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a15a9087-40ab-474e-802c-8f5e838abd15" containerName="nova-api-log" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.921711 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a15a9087-40ab-474e-802c-8f5e838abd15" containerName="nova-api-log" Dec 11 10:15:17 crc kubenswrapper[4746]: E1211 10:15:17.921734 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" containerName="ceilometer-central-agent" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.921740 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" containerName="ceilometer-central-agent" Dec 11 10:15:17 crc kubenswrapper[4746]: E1211 
10:15:17.921758 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a15a9087-40ab-474e-802c-8f5e838abd15" containerName="nova-api-api" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.921766 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a15a9087-40ab-474e-802c-8f5e838abd15" containerName="nova-api-api" Dec 11 10:15:17 crc kubenswrapper[4746]: E1211 10:15:17.921788 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" containerName="proxy-httpd" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.921795 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" containerName="proxy-httpd" Dec 11 10:15:17 crc kubenswrapper[4746]: E1211 10:15:17.921804 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" containerName="sg-core" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.921811 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" containerName="sg-core" Dec 11 10:15:17 crc kubenswrapper[4746]: E1211 10:15:17.921830 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" containerName="ceilometer-notification-agent" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.921836 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" containerName="ceilometer-notification-agent" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.922064 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a15a9087-40ab-474e-802c-8f5e838abd15" containerName="nova-api-api" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.922075 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" containerName="sg-core" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.922090 
4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a15a9087-40ab-474e-802c-8f5e838abd15" containerName="nova-api-log" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.922103 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" containerName="proxy-httpd" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.922118 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" containerName="ceilometer-central-agent" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.922128 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" containerName="ceilometer-notification-agent" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.924964 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.928282 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.928534 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.930470 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.945141 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.947380 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.956196 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.956459 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.956594 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.956692 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:15:17 crc kubenswrapper[4746]: I1211 10:15:17.974114 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.059317 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.059463 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-config-data\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.059503 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-public-tls-certs\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc 
kubenswrapper[4746]: I1211 10:15:18.059531 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-log-httpd\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.059560 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.059584 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-scripts\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.059603 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.059641 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wzzm\" (UniqueName: \"kubernetes.io/projected/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-kube-api-access-9wzzm\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.059674 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.059702 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-config-data\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.059883 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed850608-5640-46d7-bc61-6f88fce9e517-logs\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.059910 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-run-httpd\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.060163 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.060227 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rhgr\" (UniqueName: 
\"kubernetes.io/projected/ed850608-5640-46d7-bc61-6f88fce9e517-kube-api-access-4rhgr\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.163063 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-log-httpd\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.163130 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.163156 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-scripts\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.163170 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.163204 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wzzm\" (UniqueName: \"kubernetes.io/projected/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-kube-api-access-9wzzm\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc 
kubenswrapper[4746]: I1211 10:15:18.163236 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.163258 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-config-data\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.163365 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed850608-5640-46d7-bc61-6f88fce9e517-logs\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.163391 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-run-httpd\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.163433 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.163463 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rhgr\" (UniqueName: \"kubernetes.io/projected/ed850608-5640-46d7-bc61-6f88fce9e517-kube-api-access-4rhgr\") pod 
\"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.163496 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.163565 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-config-data\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.163592 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-public-tls-certs\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.163729 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-log-httpd\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.164942 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed850608-5640-46d7-bc61-6f88fce9e517-logs\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.165354 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-run-httpd\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.176819 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.176844 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.177531 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.177614 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-scripts\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.177656 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-config-data\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.180116 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.183669 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.184191 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-public-tls-certs\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.197071 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rhgr\" (UniqueName: \"kubernetes.io/projected/ed850608-5640-46d7-bc61-6f88fce9e517-kube-api-access-4rhgr\") pod \"nova-api-0\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.197525 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-config-data\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.207205 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wzzm\" (UniqueName: \"kubernetes.io/projected/7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad-kube-api-access-9wzzm\") pod \"ceilometer-0\" (UID: \"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad\") " 
pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.244771 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-rtspz"] Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.247371 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rtspz" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.250136 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.251311 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.256998 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-rtspz"] Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.276889 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.289473 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.380450 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlt4c\" (UniqueName: \"kubernetes.io/projected/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-kube-api-access-mlt4c\") pod \"nova-cell1-cell-mapping-rtspz\" (UID: \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\") " pod="openstack/nova-cell1-cell-mapping-rtspz" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.380558 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-config-data\") pod \"nova-cell1-cell-mapping-rtspz\" (UID: \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\") " pod="openstack/nova-cell1-cell-mapping-rtspz" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.380834 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-scripts\") pod \"nova-cell1-cell-mapping-rtspz\" (UID: \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\") " pod="openstack/nova-cell1-cell-mapping-rtspz" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.380887 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rtspz\" (UID: \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\") " pod="openstack/nova-cell1-cell-mapping-rtspz" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.484275 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlt4c\" (UniqueName: \"kubernetes.io/projected/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-kube-api-access-mlt4c\") pod \"nova-cell1-cell-mapping-rtspz\" (UID: 
\"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\") " pod="openstack/nova-cell1-cell-mapping-rtspz" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.484398 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-config-data\") pod \"nova-cell1-cell-mapping-rtspz\" (UID: \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\") " pod="openstack/nova-cell1-cell-mapping-rtspz" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.484539 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-scripts\") pod \"nova-cell1-cell-mapping-rtspz\" (UID: \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\") " pod="openstack/nova-cell1-cell-mapping-rtspz" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.484570 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rtspz\" (UID: \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\") " pod="openstack/nova-cell1-cell-mapping-rtspz" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.493665 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-rtspz\" (UID: \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\") " pod="openstack/nova-cell1-cell-mapping-rtspz" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.494370 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-scripts\") pod \"nova-cell1-cell-mapping-rtspz\" (UID: \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\") " pod="openstack/nova-cell1-cell-mapping-rtspz" Dec 
11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.502985 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-config-data\") pod \"nova-cell1-cell-mapping-rtspz\" (UID: \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\") " pod="openstack/nova-cell1-cell-mapping-rtspz" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.511908 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlt4c\" (UniqueName: \"kubernetes.io/projected/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-kube-api-access-mlt4c\") pod \"nova-cell1-cell-mapping-rtspz\" (UID: \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\") " pod="openstack/nova-cell1-cell-mapping-rtspz" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.606906 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rtspz" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.932305 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.932674 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 10:15:18 crc kubenswrapper[4746]: I1211 10:15:18.940938 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:15:19 crc kubenswrapper[4746]: I1211 10:15:19.077466 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 11 10:15:19 crc kubenswrapper[4746]: W1211 10:15:19.259329 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6aa3845_a29e_4f96_b04a_d3ac1ffad8ca.slice/crio-e51041ce3ac6a7eb406fdfa9cf4ec960eae86fb5bc2b2ca69d6dd3495e1e1ab9 WatchSource:0}: Error finding container 
e51041ce3ac6a7eb406fdfa9cf4ec960eae86fb5bc2b2ca69d6dd3495e1e1ab9: Status 404 returned error can't find the container with id e51041ce3ac6a7eb406fdfa9cf4ec960eae86fb5bc2b2ca69d6dd3495e1e1ab9 Dec 11 10:15:19 crc kubenswrapper[4746]: I1211 10:15:19.306079 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-rtspz"] Dec 11 10:15:19 crc kubenswrapper[4746]: I1211 10:15:19.676450 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e726807-c33c-4d85-9165-5e7646b0d813" path="/var/lib/kubelet/pods/5e726807-c33c-4d85-9165-5e7646b0d813/volumes" Dec 11 10:15:19 crc kubenswrapper[4746]: I1211 10:15:19.677891 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a15a9087-40ab-474e-802c-8f5e838abd15" path="/var/lib/kubelet/pods/a15a9087-40ab-474e-802c-8f5e838abd15/volumes" Dec 11 10:15:19 crc kubenswrapper[4746]: I1211 10:15:19.802959 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad","Type":"ContainerStarted","Data":"207e2232097a0ab389b65333436c37041590365c403c3abe8a7550ed03eca981"} Dec 11 10:15:19 crc kubenswrapper[4746]: I1211 10:15:19.806862 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rtspz" event={"ID":"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca","Type":"ContainerStarted","Data":"e51041ce3ac6a7eb406fdfa9cf4ec960eae86fb5bc2b2ca69d6dd3495e1e1ab9"} Dec 11 10:15:19 crc kubenswrapper[4746]: I1211 10:15:19.823317 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ed850608-5640-46d7-bc61-6f88fce9e517","Type":"ContainerStarted","Data":"b1ba49505b96684a43a6f2103255d01e975179a0b2be3572576fd380d8359304"} Dec 11 10:15:19 crc kubenswrapper[4746]: I1211 10:15:19.823402 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"ed850608-5640-46d7-bc61-6f88fce9e517","Type":"ContainerStarted","Data":"ccb10b728ef1971492f21ef0ee0535f5ebd23209e9af2a1dd8444ff8593cd6af"} Dec 11 10:15:19 crc kubenswrapper[4746]: I1211 10:15:19.957467 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="53a1abd9-6106-4686-afdc-db0324ce2e36" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 10:15:19 crc kubenswrapper[4746]: I1211 10:15:19.958321 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="53a1abd9-6106-4686-afdc-db0324ce2e36" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 10:15:20 crc kubenswrapper[4746]: I1211 10:15:20.250324 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:15:20 crc kubenswrapper[4746]: I1211 10:15:20.428575 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-5ngml"] Dec 11 10:15:20 crc kubenswrapper[4746]: I1211 10:15:20.428985 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bccf8f775-5ngml" podUID="bc9c32d3-f278-46f4-8c02-ddfdf905c39d" containerName="dnsmasq-dns" containerID="cri-o://d10ff6167d55fa9ca3754f1f5a9dea0f08757f6cc78184bc59958d24716a8778" gracePeriod=10 Dec 11 10:15:20 crc kubenswrapper[4746]: I1211 10:15:20.859482 4746 generic.go:334] "Generic (PLEG): container finished" podID="bc9c32d3-f278-46f4-8c02-ddfdf905c39d" containerID="d10ff6167d55fa9ca3754f1f5a9dea0f08757f6cc78184bc59958d24716a8778" exitCode=0 Dec 11 10:15:20 crc kubenswrapper[4746]: I1211 10:15:20.860164 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-bccf8f775-5ngml" event={"ID":"bc9c32d3-f278-46f4-8c02-ddfdf905c39d","Type":"ContainerDied","Data":"d10ff6167d55fa9ca3754f1f5a9dea0f08757f6cc78184bc59958d24716a8778"} Dec 11 10:15:20 crc kubenswrapper[4746]: I1211 10:15:20.861851 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad","Type":"ContainerStarted","Data":"ab6e6ded303802a43794c8ac3d5263c9bb28962aa9c114f85fa4d581843dc297"} Dec 11 10:15:20 crc kubenswrapper[4746]: I1211 10:15:20.863155 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rtspz" event={"ID":"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca","Type":"ContainerStarted","Data":"21bcddad0e871d6889940a9b8d1dd13c838566ca365d8a6b601f889ecfc780b3"} Dec 11 10:15:20 crc kubenswrapper[4746]: I1211 10:15:20.871776 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ed850608-5640-46d7-bc61-6f88fce9e517","Type":"ContainerStarted","Data":"5724739614b87ab1133065730b8aa2fa841ff869bfe50dc7bc7a89f036192594"} Dec 11 10:15:20 crc kubenswrapper[4746]: I1211 10:15:20.915853 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-rtspz" podStartSLOduration=2.915818565 podStartE2EDuration="2.915818565s" podCreationTimestamp="2025-12-11 10:15:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:15:20.904799549 +0000 UTC m=+1293.764662882" watchObservedRunningTime="2025-12-11 10:15:20.915818565 +0000 UTC m=+1293.775681878" Dec 11 10:15:20 crc kubenswrapper[4746]: I1211 10:15:20.951762 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.95173778 podStartE2EDuration="3.95173778s" podCreationTimestamp="2025-12-11 10:15:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:15:20.936676025 +0000 UTC m=+1293.796539338" watchObservedRunningTime="2025-12-11 10:15:20.95173778 +0000 UTC m=+1293.811601113" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.135421 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-5ngml" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.237338 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-dns-swift-storage-0\") pod \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.237438 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5crkx\" (UniqueName: \"kubernetes.io/projected/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-kube-api-access-5crkx\") pod \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.237553 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-ovsdbserver-nb\") pod \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.237600 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-dns-svc\") pod \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.237674 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-ovsdbserver-sb\") pod \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.237788 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-config\") pod \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\" (UID: \"bc9c32d3-f278-46f4-8c02-ddfdf905c39d\") " Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.246205 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-kube-api-access-5crkx" (OuterVolumeSpecName: "kube-api-access-5crkx") pod "bc9c32d3-f278-46f4-8c02-ddfdf905c39d" (UID: "bc9c32d3-f278-46f4-8c02-ddfdf905c39d"). InnerVolumeSpecName "kube-api-access-5crkx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.326187 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc9c32d3-f278-46f4-8c02-ddfdf905c39d" (UID: "bc9c32d3-f278-46f4-8c02-ddfdf905c39d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.337815 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bc9c32d3-f278-46f4-8c02-ddfdf905c39d" (UID: "bc9c32d3-f278-46f4-8c02-ddfdf905c39d"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.341235 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.341276 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5crkx\" (UniqueName: \"kubernetes.io/projected/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-kube-api-access-5crkx\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.341290 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.345305 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc9c32d3-f278-46f4-8c02-ddfdf905c39d" (UID: "bc9c32d3-f278-46f4-8c02-ddfdf905c39d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.346940 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc9c32d3-f278-46f4-8c02-ddfdf905c39d" (UID: "bc9c32d3-f278-46f4-8c02-ddfdf905c39d"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.347242 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-config" (OuterVolumeSpecName: "config") pod "bc9c32d3-f278-46f4-8c02-ddfdf905c39d" (UID: "bc9c32d3-f278-46f4-8c02-ddfdf905c39d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.443533 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.443567 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.443576 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc9c32d3-f278-46f4-8c02-ddfdf905c39d-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.887244 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bccf8f775-5ngml" event={"ID":"bc9c32d3-f278-46f4-8c02-ddfdf905c39d","Type":"ContainerDied","Data":"80250e7e983ae238348104ed6ae6ec34e35db7f4526c6cf417d42d552b349588"} Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.887834 4746 scope.go:117] "RemoveContainer" containerID="d10ff6167d55fa9ca3754f1f5a9dea0f08757f6cc78184bc59958d24716a8778" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.887281 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bccf8f775-5ngml" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.890803 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad","Type":"ContainerStarted","Data":"e572bdc661ffbdd6ef5ef8b617b0cf5a63b34880268c5b398f0625ba5da2e5d9"} Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.929586 4746 scope.go:117] "RemoveContainer" containerID="399cf3be6ea198b1d59d3e3e8e604bc646c056b9d290dfb6d68e5c85badba28c" Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.941991 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-5ngml"] Dec 11 10:15:21 crc kubenswrapper[4746]: I1211 10:15:21.952525 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bccf8f775-5ngml"] Dec 11 10:15:23 crc kubenswrapper[4746]: I1211 10:15:23.641717 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc9c32d3-f278-46f4-8c02-ddfdf905c39d" path="/var/lib/kubelet/pods/bc9c32d3-f278-46f4-8c02-ddfdf905c39d/volumes" Dec 11 10:15:23 crc kubenswrapper[4746]: I1211 10:15:23.918268 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad","Type":"ContainerStarted","Data":"fc8534fa0ee9e10dbd1fc152e10a879e2bc4bfc3b23f01c17840d24e26319767"} Dec 11 10:15:25 crc kubenswrapper[4746]: I1211 10:15:25.945479 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad","Type":"ContainerStarted","Data":"77882b699170d593c4444aa87fff7e597c50a2c2dbd68ad77d54732f294d9bd3"} Dec 11 10:15:25 crc kubenswrapper[4746]: I1211 10:15:25.946205 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 11 10:15:25 crc kubenswrapper[4746]: I1211 10:15:25.988154 4746 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.193857549 podStartE2EDuration="8.988124732s" podCreationTimestamp="2025-12-11 10:15:17 +0000 UTC" firstStartedPulling="2025-12-11 10:15:19.086129836 +0000 UTC m=+1291.945993149" lastFinishedPulling="2025-12-11 10:15:24.880397019 +0000 UTC m=+1297.740260332" observedRunningTime="2025-12-11 10:15:25.97498872 +0000 UTC m=+1298.834852033" watchObservedRunningTime="2025-12-11 10:15:25.988124732 +0000 UTC m=+1298.847988045" Dec 11 10:15:26 crc kubenswrapper[4746]: I1211 10:15:26.958088 4746 generic.go:334] "Generic (PLEG): container finished" podID="e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca" containerID="21bcddad0e871d6889940a9b8d1dd13c838566ca365d8a6b601f889ecfc780b3" exitCode=0 Dec 11 10:15:26 crc kubenswrapper[4746]: I1211 10:15:26.958165 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rtspz" event={"ID":"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca","Type":"ContainerDied","Data":"21bcddad0e871d6889940a9b8d1dd13c838566ca365d8a6b601f889ecfc780b3"} Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.290776 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.291407 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.439708 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rtspz" Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.581674 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-config-data\") pod \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\" (UID: \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\") " Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.581869 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlt4c\" (UniqueName: \"kubernetes.io/projected/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-kube-api-access-mlt4c\") pod \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\" (UID: \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\") " Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.582007 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-combined-ca-bundle\") pod \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\" (UID: \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\") " Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.582126 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-scripts\") pod \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\" (UID: \"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca\") " Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.593799 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-scripts" (OuterVolumeSpecName: "scripts") pod "e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca" (UID: "e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.594923 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-kube-api-access-mlt4c" (OuterVolumeSpecName: "kube-api-access-mlt4c") pod "e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca" (UID: "e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca"). InnerVolumeSpecName "kube-api-access-mlt4c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.624820 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca" (UID: "e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.636126 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-config-data" (OuterVolumeSpecName: "config-data") pod "e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca" (UID: "e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.685870 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.685925 4746 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-scripts\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.685938 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.685952 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlt4c\" (UniqueName: \"kubernetes.io/projected/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca-kube-api-access-mlt4c\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.941551 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.943758 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.948977 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.986673 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-rtspz" event={"ID":"e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca","Type":"ContainerDied","Data":"e51041ce3ac6a7eb406fdfa9cf4ec960eae86fb5bc2b2ca69d6dd3495e1e1ab9"} Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 
10:15:28.986759 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e51041ce3ac6a7eb406fdfa9cf4ec960eae86fb5bc2b2ca69d6dd3495e1e1ab9" Dec 11 10:15:28 crc kubenswrapper[4746]: I1211 10:15:28.986866 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-rtspz" Dec 11 10:15:29 crc kubenswrapper[4746]: I1211 10:15:29.009260 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 10:15:29 crc kubenswrapper[4746]: I1211 10:15:29.196302 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:15:29 crc kubenswrapper[4746]: I1211 10:15:29.197125 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ed850608-5640-46d7-bc61-6f88fce9e517" containerName="nova-api-api" containerID="cri-o://5724739614b87ab1133065730b8aa2fa841ff869bfe50dc7bc7a89f036192594" gracePeriod=30 Dec 11 10:15:29 crc kubenswrapper[4746]: I1211 10:15:29.197500 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ed850608-5640-46d7-bc61-6f88fce9e517" containerName="nova-api-log" containerID="cri-o://b1ba49505b96684a43a6f2103255d01e975179a0b2be3572576fd380d8359304" gracePeriod=30 Dec 11 10:15:29 crc kubenswrapper[4746]: I1211 10:15:29.205368 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ed850608-5640-46d7-bc61-6f88fce9e517" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": EOF" Dec 11 10:15:29 crc kubenswrapper[4746]: I1211 10:15:29.205565 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ed850608-5640-46d7-bc61-6f88fce9e517" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.201:8774/\": EOF" Dec 11 10:15:29 crc kubenswrapper[4746]: I1211 
10:15:29.257633 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:15:29 crc kubenswrapper[4746]: I1211 10:15:29.257901 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="a9b013e1-4fb8-40ff-a895-a21fff60b543" containerName="nova-scheduler-scheduler" containerID="cri-o://f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f" gracePeriod=30 Dec 11 10:15:29 crc kubenswrapper[4746]: I1211 10:15:29.291124 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:15:29 crc kubenswrapper[4746]: E1211 10:15:29.594404 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 10:15:29 crc kubenswrapper[4746]: E1211 10:15:29.597105 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 10:15:29 crc kubenswrapper[4746]: E1211 10:15:29.601231 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 10:15:29 crc kubenswrapper[4746]: E1211 10:15:29.601286 4746 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a9b013e1-4fb8-40ff-a895-a21fff60b543" containerName="nova-scheduler-scheduler" Dec 11 10:15:30 crc kubenswrapper[4746]: I1211 10:15:30.001021 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ed850608-5640-46d7-bc61-6f88fce9e517","Type":"ContainerDied","Data":"b1ba49505b96684a43a6f2103255d01e975179a0b2be3572576fd380d8359304"} Dec 11 10:15:30 crc kubenswrapper[4746]: I1211 10:15:30.004158 4746 generic.go:334] "Generic (PLEG): container finished" podID="ed850608-5640-46d7-bc61-6f88fce9e517" containerID="b1ba49505b96684a43a6f2103255d01e975179a0b2be3572576fd380d8359304" exitCode=143 Dec 11 10:15:31 crc kubenswrapper[4746]: I1211 10:15:31.015733 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="53a1abd9-6106-4686-afdc-db0324ce2e36" containerName="nova-metadata-log" containerID="cri-o://71d6957dd340c70334666a4f7c7133e0607aa65dcb1a4be3e488bdbef7332be7" gracePeriod=30 Dec 11 10:15:31 crc kubenswrapper[4746]: I1211 10:15:31.015804 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="53a1abd9-6106-4686-afdc-db0324ce2e36" containerName="nova-metadata-metadata" containerID="cri-o://9fffde0ce1cd75a619e5db7b2535731d6ae0fcdbde2d4addaec3ff6e5fa61aca" gracePeriod=30 Dec 11 10:15:32 crc kubenswrapper[4746]: I1211 10:15:32.031194 4746 generic.go:334] "Generic (PLEG): container finished" podID="53a1abd9-6106-4686-afdc-db0324ce2e36" containerID="71d6957dd340c70334666a4f7c7133e0607aa65dcb1a4be3e488bdbef7332be7" exitCode=143 Dec 11 10:15:32 crc kubenswrapper[4746]: I1211 10:15:32.031332 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53a1abd9-6106-4686-afdc-db0324ce2e36","Type":"ContainerDied","Data":"71d6957dd340c70334666a4f7c7133e0607aa65dcb1a4be3e488bdbef7332be7"} Dec 
11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.161915 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="53a1abd9-6106-4686-afdc-db0324ce2e36" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:50746->10.217.0.198:8775: read: connection reset by peer" Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.161915 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="53a1abd9-6106-4686-afdc-db0324ce2e36" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.198:8775/\": read tcp 10.217.0.2:50742->10.217.0.198:8775: read: connection reset by peer" Dec 11 10:15:34 crc kubenswrapper[4746]: E1211 10:15:34.589657 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 10:15:34 crc kubenswrapper[4746]: E1211 10:15:34.602206 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 10:15:34 crc kubenswrapper[4746]: E1211 10:15:34.611858 4746 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 11 10:15:34 crc kubenswrapper[4746]: E1211 10:15:34.611990 4746 
prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="a9b013e1-4fb8-40ff-a895-a21fff60b543" containerName="nova-scheduler-scheduler" Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.748815 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.844324 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-config-data\") pod \"53a1abd9-6106-4686-afdc-db0324ce2e36\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.845821 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-nova-metadata-tls-certs\") pod \"53a1abd9-6106-4686-afdc-db0324ce2e36\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.845956 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-combined-ca-bundle\") pod \"53a1abd9-6106-4686-afdc-db0324ce2e36\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.846057 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vptzw\" (UniqueName: \"kubernetes.io/projected/53a1abd9-6106-4686-afdc-db0324ce2e36-kube-api-access-vptzw\") pod \"53a1abd9-6106-4686-afdc-db0324ce2e36\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.846143 4746 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53a1abd9-6106-4686-afdc-db0324ce2e36-logs\") pod \"53a1abd9-6106-4686-afdc-db0324ce2e36\" (UID: \"53a1abd9-6106-4686-afdc-db0324ce2e36\") " Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.847301 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53a1abd9-6106-4686-afdc-db0324ce2e36-logs" (OuterVolumeSpecName: "logs") pod "53a1abd9-6106-4686-afdc-db0324ce2e36" (UID: "53a1abd9-6106-4686-afdc-db0324ce2e36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.862505 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53a1abd9-6106-4686-afdc-db0324ce2e36-kube-api-access-vptzw" (OuterVolumeSpecName: "kube-api-access-vptzw") pod "53a1abd9-6106-4686-afdc-db0324ce2e36" (UID: "53a1abd9-6106-4686-afdc-db0324ce2e36"). InnerVolumeSpecName "kube-api-access-vptzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.892893 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53a1abd9-6106-4686-afdc-db0324ce2e36" (UID: "53a1abd9-6106-4686-afdc-db0324ce2e36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.897695 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-config-data" (OuterVolumeSpecName: "config-data") pod "53a1abd9-6106-4686-afdc-db0324ce2e36" (UID: "53a1abd9-6106-4686-afdc-db0324ce2e36"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.936418 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "53a1abd9-6106-4686-afdc-db0324ce2e36" (UID: "53a1abd9-6106-4686-afdc-db0324ce2e36"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.955323 4746 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.955380 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.955392 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vptzw\" (UniqueName: \"kubernetes.io/projected/53a1abd9-6106-4686-afdc-db0324ce2e36-kube-api-access-vptzw\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.955404 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53a1abd9-6106-4686-afdc-db0324ce2e36-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:34 crc kubenswrapper[4746]: I1211 10:15:34.955418 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a1abd9-6106-4686-afdc-db0324ce2e36-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.112984 4746 generic.go:334] "Generic (PLEG): container finished" 
podID="53a1abd9-6106-4686-afdc-db0324ce2e36" containerID="9fffde0ce1cd75a619e5db7b2535731d6ae0fcdbde2d4addaec3ff6e5fa61aca" exitCode=0 Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.113020 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53a1abd9-6106-4686-afdc-db0324ce2e36","Type":"ContainerDied","Data":"9fffde0ce1cd75a619e5db7b2535731d6ae0fcdbde2d4addaec3ff6e5fa61aca"} Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.113120 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"53a1abd9-6106-4686-afdc-db0324ce2e36","Type":"ContainerDied","Data":"84de99168f6da84f8692728eb2b385610eb2b2739f26d71c9122397c8338ac70"} Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.113150 4746 scope.go:117] "RemoveContainer" containerID="9fffde0ce1cd75a619e5db7b2535731d6ae0fcdbde2d4addaec3ff6e5fa61aca" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.113503 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.146134 4746 scope.go:117] "RemoveContainer" containerID="71d6957dd340c70334666a4f7c7133e0607aa65dcb1a4be3e488bdbef7332be7" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.173617 4746 scope.go:117] "RemoveContainer" containerID="9fffde0ce1cd75a619e5db7b2535731d6ae0fcdbde2d4addaec3ff6e5fa61aca" Dec 11 10:15:35 crc kubenswrapper[4746]: E1211 10:15:35.176305 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fffde0ce1cd75a619e5db7b2535731d6ae0fcdbde2d4addaec3ff6e5fa61aca\": container with ID starting with 9fffde0ce1cd75a619e5db7b2535731d6ae0fcdbde2d4addaec3ff6e5fa61aca not found: ID does not exist" containerID="9fffde0ce1cd75a619e5db7b2535731d6ae0fcdbde2d4addaec3ff6e5fa61aca" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.176358 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fffde0ce1cd75a619e5db7b2535731d6ae0fcdbde2d4addaec3ff6e5fa61aca"} err="failed to get container status \"9fffde0ce1cd75a619e5db7b2535731d6ae0fcdbde2d4addaec3ff6e5fa61aca\": rpc error: code = NotFound desc = could not find container \"9fffde0ce1cd75a619e5db7b2535731d6ae0fcdbde2d4addaec3ff6e5fa61aca\": container with ID starting with 9fffde0ce1cd75a619e5db7b2535731d6ae0fcdbde2d4addaec3ff6e5fa61aca not found: ID does not exist" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.176396 4746 scope.go:117] "RemoveContainer" containerID="71d6957dd340c70334666a4f7c7133e0607aa65dcb1a4be3e488bdbef7332be7" Dec 11 10:15:35 crc kubenswrapper[4746]: E1211 10:15:35.176902 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71d6957dd340c70334666a4f7c7133e0607aa65dcb1a4be3e488bdbef7332be7\": container with ID starting with 
71d6957dd340c70334666a4f7c7133e0607aa65dcb1a4be3e488bdbef7332be7 not found: ID does not exist" containerID="71d6957dd340c70334666a4f7c7133e0607aa65dcb1a4be3e488bdbef7332be7" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.176946 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71d6957dd340c70334666a4f7c7133e0607aa65dcb1a4be3e488bdbef7332be7"} err="failed to get container status \"71d6957dd340c70334666a4f7c7133e0607aa65dcb1a4be3e488bdbef7332be7\": rpc error: code = NotFound desc = could not find container \"71d6957dd340c70334666a4f7c7133e0607aa65dcb1a4be3e488bdbef7332be7\": container with ID starting with 71d6957dd340c70334666a4f7c7133e0607aa65dcb1a4be3e488bdbef7332be7 not found: ID does not exist" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.179591 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.199249 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.216699 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:15:35 crc kubenswrapper[4746]: E1211 10:15:35.217402 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a1abd9-6106-4686-afdc-db0324ce2e36" containerName="nova-metadata-metadata" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.217434 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a1abd9-6106-4686-afdc-db0324ce2e36" containerName="nova-metadata-metadata" Dec 11 10:15:35 crc kubenswrapper[4746]: E1211 10:15:35.217458 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9c32d3-f278-46f4-8c02-ddfdf905c39d" containerName="dnsmasq-dns" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.217468 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9c32d3-f278-46f4-8c02-ddfdf905c39d" 
containerName="dnsmasq-dns" Dec 11 10:15:35 crc kubenswrapper[4746]: E1211 10:15:35.217575 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca" containerName="nova-manage" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.217588 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca" containerName="nova-manage" Dec 11 10:15:35 crc kubenswrapper[4746]: E1211 10:15:35.217605 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc9c32d3-f278-46f4-8c02-ddfdf905c39d" containerName="init" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.217612 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc9c32d3-f278-46f4-8c02-ddfdf905c39d" containerName="init" Dec 11 10:15:35 crc kubenswrapper[4746]: E1211 10:15:35.217635 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a1abd9-6106-4686-afdc-db0324ce2e36" containerName="nova-metadata-log" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.217647 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="53a1abd9-6106-4686-afdc-db0324ce2e36" containerName="nova-metadata-log" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.217904 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca" containerName="nova-manage" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.217940 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a1abd9-6106-4686-afdc-db0324ce2e36" containerName="nova-metadata-log" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.217960 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc9c32d3-f278-46f4-8c02-ddfdf905c39d" containerName="dnsmasq-dns" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.217978 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a1abd9-6106-4686-afdc-db0324ce2e36" 
containerName="nova-metadata-metadata" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.219680 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.222412 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.222781 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.231487 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.334791 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dcbca6a-41cb-489b-9632-00e734e2c95b-config-data\") pod \"nova-metadata-0\" (UID: \"3dcbca6a-41cb-489b-9632-00e734e2c95b\") " pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.335880 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dcbca6a-41cb-489b-9632-00e734e2c95b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3dcbca6a-41cb-489b-9632-00e734e2c95b\") " pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.336076 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dcbca6a-41cb-489b-9632-00e734e2c95b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3dcbca6a-41cb-489b-9632-00e734e2c95b\") " pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.336420 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dcbca6a-41cb-489b-9632-00e734e2c95b-logs\") pod \"nova-metadata-0\" (UID: \"3dcbca6a-41cb-489b-9632-00e734e2c95b\") " pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.336479 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtdds\" (UniqueName: \"kubernetes.io/projected/3dcbca6a-41cb-489b-9632-00e734e2c95b-kube-api-access-jtdds\") pod \"nova-metadata-0\" (UID: \"3dcbca6a-41cb-489b-9632-00e734e2c95b\") " pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.438994 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dcbca6a-41cb-489b-9632-00e734e2c95b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3dcbca6a-41cb-489b-9632-00e734e2c95b\") " pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.439163 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dcbca6a-41cb-489b-9632-00e734e2c95b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3dcbca6a-41cb-489b-9632-00e734e2c95b\") " pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.439414 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dcbca6a-41cb-489b-9632-00e734e2c95b-logs\") pod \"nova-metadata-0\" (UID: \"3dcbca6a-41cb-489b-9632-00e734e2c95b\") " pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.439447 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtdds\" (UniqueName: 
\"kubernetes.io/projected/3dcbca6a-41cb-489b-9632-00e734e2c95b-kube-api-access-jtdds\") pod \"nova-metadata-0\" (UID: \"3dcbca6a-41cb-489b-9632-00e734e2c95b\") " pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.439501 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dcbca6a-41cb-489b-9632-00e734e2c95b-config-data\") pod \"nova-metadata-0\" (UID: \"3dcbca6a-41cb-489b-9632-00e734e2c95b\") " pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.440800 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dcbca6a-41cb-489b-9632-00e734e2c95b-logs\") pod \"nova-metadata-0\" (UID: \"3dcbca6a-41cb-489b-9632-00e734e2c95b\") " pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.445800 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dcbca6a-41cb-489b-9632-00e734e2c95b-config-data\") pod \"nova-metadata-0\" (UID: \"3dcbca6a-41cb-489b-9632-00e734e2c95b\") " pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.447308 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dcbca6a-41cb-489b-9632-00e734e2c95b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3dcbca6a-41cb-489b-9632-00e734e2c95b\") " pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.451419 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dcbca6a-41cb-489b-9632-00e734e2c95b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3dcbca6a-41cb-489b-9632-00e734e2c95b\") " pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: 
I1211 10:15:35.468379 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtdds\" (UniqueName: \"kubernetes.io/projected/3dcbca6a-41cb-489b-9632-00e734e2c95b-kube-api-access-jtdds\") pod \"nova-metadata-0\" (UID: \"3dcbca6a-41cb-489b-9632-00e734e2c95b\") " pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.581584 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.656110 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53a1abd9-6106-4686-afdc-db0324ce2e36" path="/var/lib/kubelet/pods/53a1abd9-6106-4686-afdc-db0324ce2e36/volumes" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.789131 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.955340 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b013e1-4fb8-40ff-a895-a21fff60b543-config-data\") pod \"a9b013e1-4fb8-40ff-a895-a21fff60b543\" (UID: \"a9b013e1-4fb8-40ff-a895-a21fff60b543\") " Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.957454 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b013e1-4fb8-40ff-a895-a21fff60b543-combined-ca-bundle\") pod \"a9b013e1-4fb8-40ff-a895-a21fff60b543\" (UID: \"a9b013e1-4fb8-40ff-a895-a21fff60b543\") " Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.957501 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tfq5\" (UniqueName: \"kubernetes.io/projected/a9b013e1-4fb8-40ff-a895-a21fff60b543-kube-api-access-6tfq5\") pod \"a9b013e1-4fb8-40ff-a895-a21fff60b543\" (UID: \"a9b013e1-4fb8-40ff-a895-a21fff60b543\") " 
Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.966183 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b013e1-4fb8-40ff-a895-a21fff60b543-kube-api-access-6tfq5" (OuterVolumeSpecName: "kube-api-access-6tfq5") pod "a9b013e1-4fb8-40ff-a895-a21fff60b543" (UID: "a9b013e1-4fb8-40ff-a895-a21fff60b543"). InnerVolumeSpecName "kube-api-access-6tfq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.997092 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b013e1-4fb8-40ff-a895-a21fff60b543-config-data" (OuterVolumeSpecName: "config-data") pod "a9b013e1-4fb8-40ff-a895-a21fff60b543" (UID: "a9b013e1-4fb8-40ff-a895-a21fff60b543"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:35 crc kubenswrapper[4746]: I1211 10:15:35.999433 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b013e1-4fb8-40ff-a895-a21fff60b543-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9b013e1-4fb8-40ff-a895-a21fff60b543" (UID: "a9b013e1-4fb8-40ff-a895-a21fff60b543"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.060608 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b013e1-4fb8-40ff-a895-a21fff60b543-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.060662 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b013e1-4fb8-40ff-a895-a21fff60b543-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.060684 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6tfq5\" (UniqueName: \"kubernetes.io/projected/a9b013e1-4fb8-40ff-a895-a21fff60b543-kube-api-access-6tfq5\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.130327 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9b013e1-4fb8-40ff-a895-a21fff60b543" containerID="f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f" exitCode=0 Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.130416 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.130430 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a9b013e1-4fb8-40ff-a895-a21fff60b543","Type":"ContainerDied","Data":"f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f"} Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.130469 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"a9b013e1-4fb8-40ff-a895-a21fff60b543","Type":"ContainerDied","Data":"bf9f26240b7970f1600d117d95d7232bc63dc5db33d5982c38fd32b4d4e442b9"} Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.130492 4746 scope.go:117] "RemoveContainer" containerID="f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.147582 4746 generic.go:334] "Generic (PLEG): container finished" podID="ed850608-5640-46d7-bc61-6f88fce9e517" containerID="5724739614b87ab1133065730b8aa2fa841ff869bfe50dc7bc7a89f036192594" exitCode=0 Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.147636 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ed850608-5640-46d7-bc61-6f88fce9e517","Type":"ContainerDied","Data":"5724739614b87ab1133065730b8aa2fa841ff869bfe50dc7bc7a89f036192594"} Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.184036 4746 scope.go:117] "RemoveContainer" containerID="f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f" Dec 11 10:15:36 crc kubenswrapper[4746]: E1211 10:15:36.209183 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f\": container with ID starting with f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f not found: ID does not exist" 
containerID="f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.209258 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f"} err="failed to get container status \"f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f\": rpc error: code = NotFound desc = could not find container \"f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f\": container with ID starting with f37c127aed74e77fae3f3156efa10b54726f619270e6d593c71a007eb975180f not found: ID does not exist" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.236128 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.247887 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.262372 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:15:36 crc kubenswrapper[4746]: E1211 10:15:36.263627 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b013e1-4fb8-40ff-a895-a21fff60b543" containerName="nova-scheduler-scheduler" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.263656 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b013e1-4fb8-40ff-a895-a21fff60b543" containerName="nova-scheduler-scheduler" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.264076 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b013e1-4fb8-40ff-a895-a21fff60b543" containerName="nova-scheduler-scheduler" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.265844 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.272573 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.275262 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5486ad2e-b2db-4967-8308-592b79065f54-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5486ad2e-b2db-4967-8308-592b79065f54\") " pod="openstack/nova-scheduler-0" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.275733 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5486ad2e-b2db-4967-8308-592b79065f54-config-data\") pod \"nova-scheduler-0\" (UID: \"5486ad2e-b2db-4967-8308-592b79065f54\") " pod="openstack/nova-scheduler-0" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.276024 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z652\" (UniqueName: \"kubernetes.io/projected/5486ad2e-b2db-4967-8308-592b79065f54-kube-api-access-4z652\") pod \"nova-scheduler-0\" (UID: \"5486ad2e-b2db-4967-8308-592b79065f54\") " pod="openstack/nova-scheduler-0" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.278582 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.292004 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.381708 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5486ad2e-b2db-4967-8308-592b79065f54-config-data\") pod \"nova-scheduler-0\" (UID: 
\"5486ad2e-b2db-4967-8308-592b79065f54\") " pod="openstack/nova-scheduler-0" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.381802 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z652\" (UniqueName: \"kubernetes.io/projected/5486ad2e-b2db-4967-8308-592b79065f54-kube-api-access-4z652\") pod \"nova-scheduler-0\" (UID: \"5486ad2e-b2db-4967-8308-592b79065f54\") " pod="openstack/nova-scheduler-0" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.382010 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5486ad2e-b2db-4967-8308-592b79065f54-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5486ad2e-b2db-4967-8308-592b79065f54\") " pod="openstack/nova-scheduler-0" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.396080 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5486ad2e-b2db-4967-8308-592b79065f54-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"5486ad2e-b2db-4967-8308-592b79065f54\") " pod="openstack/nova-scheduler-0" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.396560 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5486ad2e-b2db-4967-8308-592b79065f54-config-data\") pod \"nova-scheduler-0\" (UID: \"5486ad2e-b2db-4967-8308-592b79065f54\") " pod="openstack/nova-scheduler-0" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.405587 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z652\" (UniqueName: \"kubernetes.io/projected/5486ad2e-b2db-4967-8308-592b79065f54-kube-api-access-4z652\") pod \"nova-scheduler-0\" (UID: \"5486ad2e-b2db-4967-8308-592b79065f54\") " pod="openstack/nova-scheduler-0" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.612502 4746 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.670239 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.794454 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rhgr\" (UniqueName: \"kubernetes.io/projected/ed850608-5640-46d7-bc61-6f88fce9e517-kube-api-access-4rhgr\") pod \"ed850608-5640-46d7-bc61-6f88fce9e517\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.794720 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed850608-5640-46d7-bc61-6f88fce9e517-logs\") pod \"ed850608-5640-46d7-bc61-6f88fce9e517\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.794869 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-internal-tls-certs\") pod \"ed850608-5640-46d7-bc61-6f88fce9e517\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.794961 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-combined-ca-bundle\") pod \"ed850608-5640-46d7-bc61-6f88fce9e517\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.795127 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-public-tls-certs\") pod \"ed850608-5640-46d7-bc61-6f88fce9e517\" (UID: 
\"ed850608-5640-46d7-bc61-6f88fce9e517\") " Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.795309 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed850608-5640-46d7-bc61-6f88fce9e517-logs" (OuterVolumeSpecName: "logs") pod "ed850608-5640-46d7-bc61-6f88fce9e517" (UID: "ed850608-5640-46d7-bc61-6f88fce9e517"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.795361 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-config-data\") pod \"ed850608-5640-46d7-bc61-6f88fce9e517\" (UID: \"ed850608-5640-46d7-bc61-6f88fce9e517\") " Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.804083 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed850608-5640-46d7-bc61-6f88fce9e517-kube-api-access-4rhgr" (OuterVolumeSpecName: "kube-api-access-4rhgr") pod "ed850608-5640-46d7-bc61-6f88fce9e517" (UID: "ed850608-5640-46d7-bc61-6f88fce9e517"). InnerVolumeSpecName "kube-api-access-4rhgr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.807747 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rhgr\" (UniqueName: \"kubernetes.io/projected/ed850608-5640-46d7-bc61-6f88fce9e517-kube-api-access-4rhgr\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.807796 4746 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ed850608-5640-46d7-bc61-6f88fce9e517-logs\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.855400 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed850608-5640-46d7-bc61-6f88fce9e517" (UID: "ed850608-5640-46d7-bc61-6f88fce9e517"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.883666 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-config-data" (OuterVolumeSpecName: "config-data") pod "ed850608-5640-46d7-bc61-6f88fce9e517" (UID: "ed850608-5640-46d7-bc61-6f88fce9e517"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.890540 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ed850608-5640-46d7-bc61-6f88fce9e517" (UID: "ed850608-5640-46d7-bc61-6f88fce9e517"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.910776 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.911291 4746 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.911308 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:36 crc kubenswrapper[4746]: I1211 10:15:36.912237 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ed850608-5640-46d7-bc61-6f88fce9e517" (UID: "ed850608-5640-46d7-bc61-6f88fce9e517"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.011958 4746 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ed850608-5640-46d7-bc61-6f88fce9e517-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.164218 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ed850608-5640-46d7-bc61-6f88fce9e517","Type":"ContainerDied","Data":"ccb10b728ef1971492f21ef0ee0535f5ebd23209e9af2a1dd8444ff8593cd6af"} Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.164296 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.164782 4746 scope.go:117] "RemoveContainer" containerID="5724739614b87ab1133065730b8aa2fa841ff869bfe50dc7bc7a89f036192594" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.173467 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3dcbca6a-41cb-489b-9632-00e734e2c95b","Type":"ContainerStarted","Data":"924c7af88b13b413f8e8b42d7483a8c4c4af96aa3ccb7716f5e797d1f59a74c6"} Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.173539 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3dcbca6a-41cb-489b-9632-00e734e2c95b","Type":"ContainerStarted","Data":"b46e30d2057a96e1e236bf4612801e79083611b444bf8b51754c804e8541fe6b"} Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.173549 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3dcbca6a-41cb-489b-9632-00e734e2c95b","Type":"ContainerStarted","Data":"7e4df9757d7bab70f87e365359ba44bfabaf22149af1e6064123510c0f736978"} Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.215504 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.2154813 podStartE2EDuration="2.2154813s" podCreationTimestamp="2025-12-11 10:15:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:15:37.212738837 +0000 UTC m=+1310.072602150" watchObservedRunningTime="2025-12-11 10:15:37.2154813 +0000 UTC m=+1310.075344613" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.242184 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.259972 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:15:37 crc 
kubenswrapper[4746]: I1211 10:15:37.267894 4746 scope.go:117] "RemoveContainer" containerID="b1ba49505b96684a43a6f2103255d01e975179a0b2be3572576fd380d8359304" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.299186 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 11 10:15:37 crc kubenswrapper[4746]: E1211 10:15:37.300362 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed850608-5640-46d7-bc61-6f88fce9e517" containerName="nova-api-api" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.300586 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed850608-5640-46d7-bc61-6f88fce9e517" containerName="nova-api-api" Dec 11 10:15:37 crc kubenswrapper[4746]: E1211 10:15:37.300778 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed850608-5640-46d7-bc61-6f88fce9e517" containerName="nova-api-log" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.300855 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed850608-5640-46d7-bc61-6f88fce9e517" containerName="nova-api-log" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.301270 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed850608-5640-46d7-bc61-6f88fce9e517" containerName="nova-api-log" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.301378 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed850608-5640-46d7-bc61-6f88fce9e517" containerName="nova-api-api" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.303213 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.308176 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.308499 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.308649 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.316072 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.332209 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.420755 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/015a8233-ebde-4703-a8bb-81267822daaa-logs\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.420853 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015a8233-ebde-4703-a8bb-81267822daaa-config-data\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.420881 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/015a8233-ebde-4703-a8bb-81267822daaa-public-tls-certs\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.420916 
4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kprhz\" (UniqueName: \"kubernetes.io/projected/015a8233-ebde-4703-a8bb-81267822daaa-kube-api-access-kprhz\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.421453 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015a8233-ebde-4703-a8bb-81267822daaa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.421658 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/015a8233-ebde-4703-a8bb-81267822daaa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.523991 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015a8233-ebde-4703-a8bb-81267822daaa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.524232 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/015a8233-ebde-4703-a8bb-81267822daaa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.524311 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/015a8233-ebde-4703-a8bb-81267822daaa-logs\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.524407 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015a8233-ebde-4703-a8bb-81267822daaa-config-data\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.524447 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/015a8233-ebde-4703-a8bb-81267822daaa-public-tls-certs\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.524510 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kprhz\" (UniqueName: \"kubernetes.io/projected/015a8233-ebde-4703-a8bb-81267822daaa-kube-api-access-kprhz\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.525798 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/015a8233-ebde-4703-a8bb-81267822daaa-logs\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.532695 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/015a8233-ebde-4703-a8bb-81267822daaa-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.533722 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/015a8233-ebde-4703-a8bb-81267822daaa-config-data\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.535628 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/015a8233-ebde-4703-a8bb-81267822daaa-internal-tls-certs\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.538837 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/015a8233-ebde-4703-a8bb-81267822daaa-public-tls-certs\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.547865 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kprhz\" (UniqueName: \"kubernetes.io/projected/015a8233-ebde-4703-a8bb-81267822daaa-kube-api-access-kprhz\") pod \"nova-api-0\" (UID: \"015a8233-ebde-4703-a8bb-81267822daaa\") " pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.617816 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.649132 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b013e1-4fb8-40ff-a895-a21fff60b543" path="/var/lib/kubelet/pods/a9b013e1-4fb8-40ff-a895-a21fff60b543/volumes" Dec 11 10:15:37 crc kubenswrapper[4746]: I1211 10:15:37.650360 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed850608-5640-46d7-bc61-6f88fce9e517" path="/var/lib/kubelet/pods/ed850608-5640-46d7-bc61-6f88fce9e517/volumes" Dec 11 10:15:38 crc kubenswrapper[4746]: I1211 10:15:38.167332 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 11 10:15:38 crc kubenswrapper[4746]: I1211 10:15:38.184852 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"015a8233-ebde-4703-a8bb-81267822daaa","Type":"ContainerStarted","Data":"3d25b0386db37f350e98972daebddf2acf4a96287eba5f9f053a591720a31e52"} Dec 11 10:15:38 crc kubenswrapper[4746]: I1211 10:15:38.187361 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5486ad2e-b2db-4967-8308-592b79065f54","Type":"ContainerStarted","Data":"222ba892b57ffa26052ab285a8164413441c41745d38550c605d6e88d456124b"} Dec 11 10:15:38 crc kubenswrapper[4746]: I1211 10:15:38.187390 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"5486ad2e-b2db-4967-8308-592b79065f54","Type":"ContainerStarted","Data":"5d85fbcbe06ed20a825714da5f26fc93b1c7ea29ba63c241142752251d2f9844"} Dec 11 10:15:39 crc kubenswrapper[4746]: I1211 10:15:39.204783 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"015a8233-ebde-4703-a8bb-81267822daaa","Type":"ContainerStarted","Data":"5f8d60d883305cb058b413dd2f81577ae367f7f161bd50f7d0cca9b71eac738b"} Dec 11 10:15:40 crc kubenswrapper[4746]: I1211 10:15:40.221540 4746 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"015a8233-ebde-4703-a8bb-81267822daaa","Type":"ContainerStarted","Data":"49c83ffb3c786665e5795a66f3ce0b8788635f8a1ba14b559db05eb560e511b3"} Dec 11 10:15:40 crc kubenswrapper[4746]: I1211 10:15:40.248193 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.248172351 podStartE2EDuration="4.248172351s" podCreationTimestamp="2025-12-11 10:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:15:38.210404195 +0000 UTC m=+1311.070267508" watchObservedRunningTime="2025-12-11 10:15:40.248172351 +0000 UTC m=+1313.108035664" Dec 11 10:15:40 crc kubenswrapper[4746]: I1211 10:15:40.248386 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.248380647 podStartE2EDuration="3.248380647s" podCreationTimestamp="2025-12-11 10:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:15:40.244073601 +0000 UTC m=+1313.103936914" watchObservedRunningTime="2025-12-11 10:15:40.248380647 +0000 UTC m=+1313.108243960" Dec 11 10:15:40 crc kubenswrapper[4746]: I1211 10:15:40.582183 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 10:15:40 crc kubenswrapper[4746]: I1211 10:15:40.582273 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 11 10:15:41 crc kubenswrapper[4746]: I1211 10:15:41.670913 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 11 10:15:45 crc kubenswrapper[4746]: I1211 10:15:45.582390 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 10:15:45 crc 
kubenswrapper[4746]: I1211 10:15:45.584668 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 11 10:15:46 crc kubenswrapper[4746]: I1211 10:15:46.600247 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3dcbca6a-41cb-489b-9632-00e734e2c95b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 10:15:46 crc kubenswrapper[4746]: I1211 10:15:46.600264 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3dcbca6a-41cb-489b-9632-00e734e2c95b" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.203:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 10:15:46 crc kubenswrapper[4746]: I1211 10:15:46.670517 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 11 10:15:46 crc kubenswrapper[4746]: I1211 10:15:46.703890 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 11 10:15:47 crc kubenswrapper[4746]: I1211 10:15:47.351759 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 11 10:15:47 crc kubenswrapper[4746]: I1211 10:15:47.618941 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 10:15:47 crc kubenswrapper[4746]: I1211 10:15:47.620021 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 11 10:15:48 crc kubenswrapper[4746]: I1211 10:15:48.293483 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 11 10:15:48 crc kubenswrapper[4746]: I1211 10:15:48.641348 4746 prober.go:107] 
"Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="015a8233-ebde-4703-a8bb-81267822daaa" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 10:15:48 crc kubenswrapper[4746]: I1211 10:15:48.641383 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="015a8233-ebde-4703-a8bb-81267822daaa" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 11 10:15:55 crc kubenswrapper[4746]: I1211 10:15:55.589394 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 10:15:55 crc kubenswrapper[4746]: I1211 10:15:55.590360 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 11 10:15:55 crc kubenswrapper[4746]: I1211 10:15:55.596480 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 10:15:55 crc kubenswrapper[4746]: I1211 10:15:55.596556 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 11 10:15:57 crc kubenswrapper[4746]: I1211 10:15:57.627957 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 10:15:57 crc kubenswrapper[4746]: I1211 10:15:57.628601 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 11 10:15:57 crc kubenswrapper[4746]: I1211 10:15:57.629230 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 10:15:57 crc kubenswrapper[4746]: I1211 10:15:57.650372 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 11 10:15:57 crc 
kubenswrapper[4746]: I1211 10:15:57.650466 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 10:15:57 crc kubenswrapper[4746]: I1211 10:15:57.650563 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 11 10:16:06 crc kubenswrapper[4746]: I1211 10:16:06.212277 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 10:16:07 crc kubenswrapper[4746]: I1211 10:16:07.487567 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 10:16:11 crc kubenswrapper[4746]: I1211 10:16:11.683445 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="6b37a306-a93c-4cb2-9a15-888df45f0ca7" containerName="rabbitmq" containerID="cri-o://94468ffebda107fe33ff04876efea461aafc7a246c310c3e28e2f6fe4862c01f" gracePeriod=604795 Dec 11 10:16:12 crc kubenswrapper[4746]: I1211 10:16:12.454241 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="c896f2d4-ac49-431e-b8c5-eda758cfa7cd" containerName="rabbitmq" containerID="cri-o://c916edefa54aed3bb5e9ed0be01019855420ec58cdee4f2e41262d107563ede0" gracePeriod=604796 Dec 11 10:16:13 crc kubenswrapper[4746]: I1211 10:16:13.744763 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="6b37a306-a93c-4cb2-9a15-888df45f0ca7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.95:5671: connect: connection refused" Dec 11 10:16:14 crc kubenswrapper[4746]: I1211 10:16:14.561791 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="c896f2d4-ac49-431e-b8c5-eda758cfa7cd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.96:5671: connect: connection refused" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 
10:16:17.723500 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-d558885bc-4c6n5"] Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.726158 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.738821 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.742345 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-4c6n5"] Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.768664 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-config\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.768721 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.768750 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.768772 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-dns-svc\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.768790 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.768870 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.769084 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfstn\" (UniqueName: \"kubernetes.io/projected/36593729-76b5-4f4a-b617-4943c1d51902-kube-api-access-hfstn\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.871737 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.871848 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-hfstn\" (UniqueName: \"kubernetes.io/projected/36593729-76b5-4f4a-b617-4943c1d51902-kube-api-access-hfstn\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.871933 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-config\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.871973 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.872000 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.872030 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-dns-svc\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.872069 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.873327 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-ovsdbserver-sb\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.873346 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-ovsdbserver-nb\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.874138 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-openstack-edpm-ipam\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.874226 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-dns-swift-storage-0\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.874882 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-dns-svc\") pod 
\"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.874903 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-config\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:17 crc kubenswrapper[4746]: I1211 10:16:17.895623 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfstn\" (UniqueName: \"kubernetes.io/projected/36593729-76b5-4f4a-b617-4943c1d51902-kube-api-access-hfstn\") pod \"dnsmasq-dns-d558885bc-4c6n5\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.066480 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:18 crc kubenswrapper[4746]: E1211 10:16:18.181253 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b37a306_a93c_4cb2_9a15_888df45f0ca7.slice/crio-conmon-94468ffebda107fe33ff04876efea461aafc7a246c310c3e28e2f6fe4862c01f.scope\": RecentStats: unable to find data in memory cache]" Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.622584 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-4c6n5"] Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.737238 4746 generic.go:334] "Generic (PLEG): container finished" podID="6b37a306-a93c-4cb2-9a15-888df45f0ca7" containerID="94468ffebda107fe33ff04876efea461aafc7a246c310c3e28e2f6fe4862c01f" exitCode=0 Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.737301 4746 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6b37a306-a93c-4cb2-9a15-888df45f0ca7","Type":"ContainerDied","Data":"94468ffebda107fe33ff04876efea461aafc7a246c310c3e28e2f6fe4862c01f"} Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.739448 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-4c6n5" event={"ID":"36593729-76b5-4f4a-b617-4943c1d51902","Type":"ContainerStarted","Data":"2cb1e0218c1d7cbd1840cd9c7b259306bb378be1ff67098cd3cf2bc96094ef11"} Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.833084 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.895759 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-tls\") pod \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.895830 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6b37a306-a93c-4cb2-9a15-888df45f0ca7-pod-info\") pod \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.896015 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-server-conf\") pod \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.896091 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-erlang-cookie\") pod \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.896150 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-plugins-conf\") pod \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.896222 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6b37a306-a93c-4cb2-9a15-888df45f0ca7-erlang-cookie-secret\") pod \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.896253 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-config-data\") pod \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.896279 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-plugins\") pod \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.896337 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4qv9\" (UniqueName: \"kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-kube-api-access-f4qv9\") pod \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " Dec 11 10:16:18 crc 
kubenswrapper[4746]: I1211 10:16:18.896369 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.896422 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-confd\") pod \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\" (UID: \"6b37a306-a93c-4cb2-9a15-888df45f0ca7\") " Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.900183 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "6b37a306-a93c-4cb2-9a15-888df45f0ca7" (UID: "6b37a306-a93c-4cb2-9a15-888df45f0ca7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.903491 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b37a306-a93c-4cb2-9a15-888df45f0ca7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "6b37a306-a93c-4cb2-9a15-888df45f0ca7" (UID: "6b37a306-a93c-4cb2-9a15-888df45f0ca7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.906390 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "6b37a306-a93c-4cb2-9a15-888df45f0ca7" (UID: "6b37a306-a93c-4cb2-9a15-888df45f0ca7"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.908987 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "6b37a306-a93c-4cb2-9a15-888df45f0ca7" (UID: "6b37a306-a93c-4cb2-9a15-888df45f0ca7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.909066 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "6b37a306-a93c-4cb2-9a15-888df45f0ca7" (UID: "6b37a306-a93c-4cb2-9a15-888df45f0ca7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.913172 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "6b37a306-a93c-4cb2-9a15-888df45f0ca7" (UID: "6b37a306-a93c-4cb2-9a15-888df45f0ca7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.918312 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-kube-api-access-f4qv9" (OuterVolumeSpecName: "kube-api-access-f4qv9") pod "6b37a306-a93c-4cb2-9a15-888df45f0ca7" (UID: "6b37a306-a93c-4cb2-9a15-888df45f0ca7"). InnerVolumeSpecName "kube-api-access-f4qv9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.921272 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/6b37a306-a93c-4cb2-9a15-888df45f0ca7-pod-info" (OuterVolumeSpecName: "pod-info") pod "6b37a306-a93c-4cb2-9a15-888df45f0ca7" (UID: "6b37a306-a93c-4cb2-9a15-888df45f0ca7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 11 10:16:18 crc kubenswrapper[4746]: I1211 10:16:18.939866 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-config-data" (OuterVolumeSpecName: "config-data") pod "6b37a306-a93c-4cb2-9a15-888df45f0ca7" (UID: "6b37a306-a93c-4cb2-9a15-888df45f0ca7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.000969 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.001006 4746 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.001022 4746 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6b37a306-a93c-4cb2-9a15-888df45f0ca7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.001032 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-config-data\") on node \"crc\" DevicePath 
\"\"" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.001040 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.001074 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4qv9\" (UniqueName: \"kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-kube-api-access-f4qv9\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.001103 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.001116 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.001127 4746 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6b37a306-a93c-4cb2-9a15-888df45f0ca7-pod-info\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.036268 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-server-conf" (OuterVolumeSpecName: "server-conf") pod "6b37a306-a93c-4cb2-9a15-888df45f0ca7" (UID: "6b37a306-a93c-4cb2-9a15-888df45f0ca7"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.048924 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.142731 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.143106 4746 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6b37a306-a93c-4cb2-9a15-888df45f0ca7-server-conf\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.189925 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "6b37a306-a93c-4cb2-9a15-888df45f0ca7" (UID: "6b37a306-a93c-4cb2-9a15-888df45f0ca7"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.253453 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6b37a306-a93c-4cb2-9a15-888df45f0ca7-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.754908 4746 generic.go:334] "Generic (PLEG): container finished" podID="c896f2d4-ac49-431e-b8c5-eda758cfa7cd" containerID="c916edefa54aed3bb5e9ed0be01019855420ec58cdee4f2e41262d107563ede0" exitCode=0 Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.754974 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c896f2d4-ac49-431e-b8c5-eda758cfa7cd","Type":"ContainerDied","Data":"c916edefa54aed3bb5e9ed0be01019855420ec58cdee4f2e41262d107563ede0"} Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.759814 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"6b37a306-a93c-4cb2-9a15-888df45f0ca7","Type":"ContainerDied","Data":"bb0486dfe115c29def0f4602e0c24f7da81e35e187c78cb402e058ba91b1f529"} Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.759870 4746 scope.go:117] "RemoveContainer" containerID="94468ffebda107fe33ff04876efea461aafc7a246c310c3e28e2f6fe4862c01f" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.759874 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.762169 4746 generic.go:334] "Generic (PLEG): container finished" podID="36593729-76b5-4f4a-b617-4943c1d51902" containerID="a3e4bd26506ce67b9b1f97d74cf3ebb763bf989743ec4e2a516b53f9cc531d29" exitCode=0 Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.762226 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-4c6n5" event={"ID":"36593729-76b5-4f4a-b617-4943c1d51902","Type":"ContainerDied","Data":"a3e4bd26506ce67b9b1f97d74cf3ebb763bf989743ec4e2a516b53f9cc531d29"} Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.827803 4746 scope.go:117] "RemoveContainer" containerID="c2000b97a92c70f413a797f74ad7af2d1ad478ddb51b4ad876910c803d0020d1" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.835263 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.848452 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.861795 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 10:16:19 crc kubenswrapper[4746]: E1211 10:16:19.862638 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b37a306-a93c-4cb2-9a15-888df45f0ca7" containerName="rabbitmq" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.862718 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b37a306-a93c-4cb2-9a15-888df45f0ca7" containerName="rabbitmq" Dec 11 10:16:19 crc kubenswrapper[4746]: E1211 10:16:19.862801 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b37a306-a93c-4cb2-9a15-888df45f0ca7" containerName="setup-container" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.862892 4746 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6b37a306-a93c-4cb2-9a15-888df45f0ca7" containerName="setup-container" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.863209 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b37a306-a93c-4cb2-9a15-888df45f0ca7" containerName="rabbitmq" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.864575 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.868966 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.869221 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.869411 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.869565 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-sqrz8" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.869707 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.869853 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.870793 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.888316 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.971011 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/9c61eb65-bc9f-4b9f-84c8-286e25295809-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.971871 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c61eb65-bc9f-4b9f-84c8-286e25295809-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.972091 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c61eb65-bc9f-4b9f-84c8-286e25295809-config-data\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.972208 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c61eb65-bc9f-4b9f-84c8-286e25295809-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.972316 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c61eb65-bc9f-4b9f-84c8-286e25295809-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.972349 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/9c61eb65-bc9f-4b9f-84c8-286e25295809-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.972422 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c61eb65-bc9f-4b9f-84c8-286e25295809-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.972562 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c61eb65-bc9f-4b9f-84c8-286e25295809-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.972885 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppjp5\" (UniqueName: \"kubernetes.io/projected/9c61eb65-bc9f-4b9f-84c8-286e25295809-kube-api-access-ppjp5\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.973066 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:19 crc kubenswrapper[4746]: I1211 10:16:19.973177 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c61eb65-bc9f-4b9f-84c8-286e25295809-pod-info\") pod 
\"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.075069 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c61eb65-bc9f-4b9f-84c8-286e25295809-config-data\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.075140 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c61eb65-bc9f-4b9f-84c8-286e25295809-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.075180 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c61eb65-bc9f-4b9f-84c8-286e25295809-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.075204 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c61eb65-bc9f-4b9f-84c8-286e25295809-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.075237 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c61eb65-bc9f-4b9f-84c8-286e25295809-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.075282 
4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c61eb65-bc9f-4b9f-84c8-286e25295809-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.075322 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppjp5\" (UniqueName: \"kubernetes.io/projected/9c61eb65-bc9f-4b9f-84c8-286e25295809-kube-api-access-ppjp5\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.075357 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.075384 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c61eb65-bc9f-4b9f-84c8-286e25295809-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.075410 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c61eb65-bc9f-4b9f-84c8-286e25295809-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.075450 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/9c61eb65-bc9f-4b9f-84c8-286e25295809-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.076833 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.080138 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9c61eb65-bc9f-4b9f-84c8-286e25295809-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.080932 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9c61eb65-bc9f-4b9f-84c8-286e25295809-config-data\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.082487 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9c61eb65-bc9f-4b9f-84c8-286e25295809-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.084207 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9c61eb65-bc9f-4b9f-84c8-286e25295809-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " 
pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.085249 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9c61eb65-bc9f-4b9f-84c8-286e25295809-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.085813 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9c61eb65-bc9f-4b9f-84c8-286e25295809-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.090219 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9c61eb65-bc9f-4b9f-84c8-286e25295809-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.090414 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9c61eb65-bc9f-4b9f-84c8-286e25295809-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.091843 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9c61eb65-bc9f-4b9f-84c8-286e25295809-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.102728 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ppjp5\" (UniqueName: \"kubernetes.io/projected/9c61eb65-bc9f-4b9f-84c8-286e25295809-kube-api-access-ppjp5\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.135385 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"9c61eb65-bc9f-4b9f-84c8-286e25295809\") " pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.297501 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.322446 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.395141 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-plugins-conf\") pod \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.395273 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-plugins\") pod \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.395313 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-tls\") pod \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " Dec 11 
10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.395435 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-erlang-cookie\") pod \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.395465 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txqqq\" (UniqueName: \"kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-kube-api-access-txqqq\") pod \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.395515 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-pod-info\") pod \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.395540 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-erlang-cookie-secret\") pod \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.395566 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-confd\") pod \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.395591 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-config-data\") pod \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.395710 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-server-conf\") pod \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.395758 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\" (UID: \"c896f2d4-ac49-431e-b8c5-eda758cfa7cd\") " Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.399108 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c896f2d4-ac49-431e-b8c5-eda758cfa7cd" (UID: "c896f2d4-ac49-431e-b8c5-eda758cfa7cd"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.400175 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c896f2d4-ac49-431e-b8c5-eda758cfa7cd" (UID: "c896f2d4-ac49-431e-b8c5-eda758cfa7cd"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.401443 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c896f2d4-ac49-431e-b8c5-eda758cfa7cd" (UID: "c896f2d4-ac49-431e-b8c5-eda758cfa7cd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.406403 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c896f2d4-ac49-431e-b8c5-eda758cfa7cd" (UID: "c896f2d4-ac49-431e-b8c5-eda758cfa7cd"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.419974 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-kube-api-access-txqqq" (OuterVolumeSpecName: "kube-api-access-txqqq") pod "c896f2d4-ac49-431e-b8c5-eda758cfa7cd" (UID: "c896f2d4-ac49-431e-b8c5-eda758cfa7cd"). InnerVolumeSpecName "kube-api-access-txqqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.426316 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c896f2d4-ac49-431e-b8c5-eda758cfa7cd" (UID: "c896f2d4-ac49-431e-b8c5-eda758cfa7cd"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.426564 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "c896f2d4-ac49-431e-b8c5-eda758cfa7cd" (UID: "c896f2d4-ac49-431e-b8c5-eda758cfa7cd"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.432513 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-pod-info" (OuterVolumeSpecName: "pod-info") pod "c896f2d4-ac49-431e-b8c5-eda758cfa7cd" (UID: "c896f2d4-ac49-431e-b8c5-eda758cfa7cd"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.441578 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-config-data" (OuterVolumeSpecName: "config-data") pod "c896f2d4-ac49-431e-b8c5-eda758cfa7cd" (UID: "c896f2d4-ac49-431e-b8c5-eda758cfa7cd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.498367 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.498405 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txqqq\" (UniqueName: \"kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-kube-api-access-txqqq\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.498415 4746 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-pod-info\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.498431 4746 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.498441 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.498472 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.498481 4746 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.498490 
4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.498499 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.500712 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-server-conf" (OuterVolumeSpecName: "server-conf") pod "c896f2d4-ac49-431e-b8c5-eda758cfa7cd" (UID: "c896f2d4-ac49-431e-b8c5-eda758cfa7cd"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.928618 4746 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-server-conf\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.952828 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.983762 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-4c6n5" event={"ID":"36593729-76b5-4f4a-b617-4943c1d51902","Type":"ContainerStarted","Data":"aab026ed36d8b875bff3dcd552a2ae9692fbf364b60b3e1b1b46bee8e4cd7d8e"} Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.984338 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.992762 4746 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c896f2d4-ac49-431e-b8c5-eda758cfa7cd","Type":"ContainerDied","Data":"07fd51cae742b38eff0481744bb9b62f34c804ed402d909b24ae097ea1061312"} Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.992845 4746 scope.go:117] "RemoveContainer" containerID="c916edefa54aed3bb5e9ed0be01019855420ec58cdee4f2e41262d107563ede0" Dec 11 10:16:20 crc kubenswrapper[4746]: I1211 10:16:20.992999 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.011812 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-d558885bc-4c6n5" podStartSLOduration=4.011792543 podStartE2EDuration="4.011792543s" podCreationTimestamp="2025-12-11 10:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:16:21.011099884 +0000 UTC m=+1353.870963197" watchObservedRunningTime="2025-12-11 10:16:21.011792543 +0000 UTC m=+1353.871655856" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.023644 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c896f2d4-ac49-431e-b8c5-eda758cfa7cd" (UID: "c896f2d4-ac49-431e-b8c5-eda758cfa7cd"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.030589 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.030627 4746 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c896f2d4-ac49-431e-b8c5-eda758cfa7cd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.052934 4746 scope.go:117] "RemoveContainer" containerID="52784d2bd3120e89e64f8e4d1ef55a5e083b21b2ce171165f5bf21bf0df40c74" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.243271 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 11 10:16:21 crc kubenswrapper[4746]: W1211 10:16:21.246809 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c61eb65_bc9f_4b9f_84c8_286e25295809.slice/crio-52d137170947af02bf3a98eb70f1ec9066cf54da12a8ddc4dc03cac8ffed8863 WatchSource:0}: Error finding container 52d137170947af02bf3a98eb70f1ec9066cf54da12a8ddc4dc03cac8ffed8863: Status 404 returned error can't find the container with id 52d137170947af02bf3a98eb70f1ec9066cf54da12a8ddc4dc03cac8ffed8863 Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.341475 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.353207 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.371234 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 10:16:21 crc kubenswrapper[4746]: E1211 10:16:21.371948 4746 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c896f2d4-ac49-431e-b8c5-eda758cfa7cd" containerName="rabbitmq" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.371973 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c896f2d4-ac49-431e-b8c5-eda758cfa7cd" containerName="rabbitmq" Dec 11 10:16:21 crc kubenswrapper[4746]: E1211 10:16:21.371995 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c896f2d4-ac49-431e-b8c5-eda758cfa7cd" containerName="setup-container" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.372006 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c896f2d4-ac49-431e-b8c5-eda758cfa7cd" containerName="setup-container" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.372347 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c896f2d4-ac49-431e-b8c5-eda758cfa7cd" containerName="rabbitmq" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.373953 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.382900 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.383584 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.383678 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bkxl5" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.383886 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.384625 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.384988 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.387366 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.391175 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.440339 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.440441 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/e96da7ab-1e2a-4f9c-bb48-9955198a646a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.440478 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e96da7ab-1e2a-4f9c-bb48-9955198a646a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.440512 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e96da7ab-1e2a-4f9c-bb48-9955198a646a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.440779 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e96da7ab-1e2a-4f9c-bb48-9955198a646a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.440876 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e96da7ab-1e2a-4f9c-bb48-9955198a646a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.441019 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4frf4\" 
(UniqueName: \"kubernetes.io/projected/e96da7ab-1e2a-4f9c-bb48-9955198a646a-kube-api-access-4frf4\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.441133 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e96da7ab-1e2a-4f9c-bb48-9955198a646a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.441371 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e96da7ab-1e2a-4f9c-bb48-9955198a646a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.441598 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e96da7ab-1e2a-4f9c-bb48-9955198a646a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.441665 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e96da7ab-1e2a-4f9c-bb48-9955198a646a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.544549 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/e96da7ab-1e2a-4f9c-bb48-9955198a646a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.544999 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.545028 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e96da7ab-1e2a-4f9c-bb48-9955198a646a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.545063 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e96da7ab-1e2a-4f9c-bb48-9955198a646a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.545087 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e96da7ab-1e2a-4f9c-bb48-9955198a646a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.545124 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e96da7ab-1e2a-4f9c-bb48-9955198a646a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.545149 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e96da7ab-1e2a-4f9c-bb48-9955198a646a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.545177 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4frf4\" (UniqueName: \"kubernetes.io/projected/e96da7ab-1e2a-4f9c-bb48-9955198a646a-kube-api-access-4frf4\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.545204 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e96da7ab-1e2a-4f9c-bb48-9955198a646a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.545251 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e96da7ab-1e2a-4f9c-bb48-9955198a646a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.545303 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e96da7ab-1e2a-4f9c-bb48-9955198a646a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.552401 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e96da7ab-1e2a-4f9c-bb48-9955198a646a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.553065 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e96da7ab-1e2a-4f9c-bb48-9955198a646a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.553512 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.576254 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e96da7ab-1e2a-4f9c-bb48-9955198a646a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.580147 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e96da7ab-1e2a-4f9c-bb48-9955198a646a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.580587 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e96da7ab-1e2a-4f9c-bb48-9955198a646a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.581274 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e96da7ab-1e2a-4f9c-bb48-9955198a646a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.584909 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e96da7ab-1e2a-4f9c-bb48-9955198a646a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.585651 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e96da7ab-1e2a-4f9c-bb48-9955198a646a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.586704 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e96da7ab-1e2a-4f9c-bb48-9955198a646a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.600002 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.607432 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4frf4\" (UniqueName: \"kubernetes.io/projected/e96da7ab-1e2a-4f9c-bb48-9955198a646a-kube-api-access-4frf4\") pod \"rabbitmq-cell1-server-0\" (UID: \"e96da7ab-1e2a-4f9c-bb48-9955198a646a\") " pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.701191 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.755261 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b37a306-a93c-4cb2-9a15-888df45f0ca7" path="/var/lib/kubelet/pods/6b37a306-a93c-4cb2-9a15-888df45f0ca7/volumes" Dec 11 10:16:21 crc kubenswrapper[4746]: I1211 10:16:21.756411 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c896f2d4-ac49-431e-b8c5-eda758cfa7cd" path="/var/lib/kubelet/pods/c896f2d4-ac49-431e-b8c5-eda758cfa7cd/volumes" Dec 11 10:16:22 crc kubenswrapper[4746]: I1211 10:16:22.009773 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9c61eb65-bc9f-4b9f-84c8-286e25295809","Type":"ContainerStarted","Data":"52d137170947af02bf3a98eb70f1ec9066cf54da12a8ddc4dc03cac8ffed8863"} Dec 11 10:16:22 crc kubenswrapper[4746]: I1211 10:16:22.314651 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 11 10:16:23 crc kubenswrapper[4746]: I1211 10:16:23.026454 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"e96da7ab-1e2a-4f9c-bb48-9955198a646a","Type":"ContainerStarted","Data":"be69e6b4870361783bbfbcfeadbf8566b2e7301fbb9c73f6dce787029bf37675"} Dec 11 10:16:24 crc kubenswrapper[4746]: I1211 10:16:24.041758 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9c61eb65-bc9f-4b9f-84c8-286e25295809","Type":"ContainerStarted","Data":"4800418a3ced3f9869ed6ff07cff584036277b5038c1c13fa1d75ee998ce4741"} Dec 11 10:16:25 crc kubenswrapper[4746]: I1211 10:16:25.059186 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e96da7ab-1e2a-4f9c-bb48-9955198a646a","Type":"ContainerStarted","Data":"e54c035a19ed46388db0d2f86dd9783ecdbb5851a6488c4a539b66e007890e26"} Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.068954 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.161425 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-n7t48"] Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.161974 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" podUID="32bb2966-d412-43e4-978e-21dc59433b4c" containerName="dnsmasq-dns" containerID="cri-o://5c7912a5988f292fb586e51c36eb144d833e6e4d61e3ba721601e58f7f8341a4" gracePeriod=10 Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.742245 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-wfn8b"] Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.747911 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.755938 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-wfn8b"] Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.797942 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.798064 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.798152 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.798205 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.798260 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-config\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.798300 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.798457 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z527w\" (UniqueName: \"kubernetes.io/projected/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-kube-api-access-z527w\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.900095 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z527w\" (UniqueName: \"kubernetes.io/projected/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-kube-api-access-z527w\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.900156 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 
10:16:28.900191 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.900249 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.900288 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.900319 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-config\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.900347 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.901659 4746 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-ovsdbserver-sb\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.901697 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-openstack-edpm-ipam\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.902177 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-ovsdbserver-nb\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.902216 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-dns-svc\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.902391 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-dns-swift-storage-0\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.903252 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-config\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:28 crc kubenswrapper[4746]: I1211 10:16:28.947284 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z527w\" (UniqueName: \"kubernetes.io/projected/0a51bb6a-6ca0-4e2d-8427-70e92cd4730d-kube-api-access-z527w\") pod \"dnsmasq-dns-78c64bc9c5-wfn8b\" (UID: \"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d\") " pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.080558 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.123035 4746 generic.go:334] "Generic (PLEG): container finished" podID="32bb2966-d412-43e4-978e-21dc59433b4c" containerID="5c7912a5988f292fb586e51c36eb144d833e6e4d61e3ba721601e58f7f8341a4" exitCode=0 Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.123203 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" event={"ID":"32bb2966-d412-43e4-978e-21dc59433b4c","Type":"ContainerDied","Data":"5c7912a5988f292fb586e51c36eb144d833e6e4d61e3ba721601e58f7f8341a4"} Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.376566 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.514154 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-dns-swift-storage-0\") pod \"32bb2966-d412-43e4-978e-21dc59433b4c\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.514377 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-ovsdbserver-nb\") pod \"32bb2966-d412-43e4-978e-21dc59433b4c\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.514408 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-config\") pod \"32bb2966-d412-43e4-978e-21dc59433b4c\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.515194 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-dns-svc\") pod \"32bb2966-d412-43e4-978e-21dc59433b4c\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.515266 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j69nq\" (UniqueName: \"kubernetes.io/projected/32bb2966-d412-43e4-978e-21dc59433b4c-kube-api-access-j69nq\") pod \"32bb2966-d412-43e4-978e-21dc59433b4c\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.515407 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-ovsdbserver-sb\") pod \"32bb2966-d412-43e4-978e-21dc59433b4c\" (UID: \"32bb2966-d412-43e4-978e-21dc59433b4c\") " Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.534010 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32bb2966-d412-43e4-978e-21dc59433b4c-kube-api-access-j69nq" (OuterVolumeSpecName: "kube-api-access-j69nq") pod "32bb2966-d412-43e4-978e-21dc59433b4c" (UID: "32bb2966-d412-43e4-978e-21dc59433b4c"). InnerVolumeSpecName "kube-api-access-j69nq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.577972 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "32bb2966-d412-43e4-978e-21dc59433b4c" (UID: "32bb2966-d412-43e4-978e-21dc59433b4c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.583866 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "32bb2966-d412-43e4-978e-21dc59433b4c" (UID: "32bb2966-d412-43e4-978e-21dc59433b4c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.584411 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-config" (OuterVolumeSpecName: "config") pod "32bb2966-d412-43e4-978e-21dc59433b4c" (UID: "32bb2966-d412-43e4-978e-21dc59433b4c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.596030 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "32bb2966-d412-43e4-978e-21dc59433b4c" (UID: "32bb2966-d412-43e4-978e-21dc59433b4c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.596736 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "32bb2966-d412-43e4-978e-21dc59433b4c" (UID: "32bb2966-d412-43e4-978e-21dc59433b4c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.624382 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.624704 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.624983 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.625103 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-dns-svc\") on node \"crc\" 
DevicePath \"\"" Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.625170 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j69nq\" (UniqueName: \"kubernetes.io/projected/32bb2966-d412-43e4-978e-21dc59433b4c-kube-api-access-j69nq\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.625230 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/32bb2966-d412-43e4-978e-21dc59433b4c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:29 crc kubenswrapper[4746]: I1211 10:16:29.678224 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78c64bc9c5-wfn8b"] Dec 11 10:16:30 crc kubenswrapper[4746]: I1211 10:16:30.139747 4746 generic.go:334] "Generic (PLEG): container finished" podID="0a51bb6a-6ca0-4e2d-8427-70e92cd4730d" containerID="da76422ba0e53476cf30a044f3ba1a258b3a8d90d0b0896c62c2b9da9572bb09" exitCode=0 Dec 11 10:16:30 crc kubenswrapper[4746]: I1211 10:16:30.139876 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" event={"ID":"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d","Type":"ContainerDied","Data":"da76422ba0e53476cf30a044f3ba1a258b3a8d90d0b0896c62c2b9da9572bb09"} Dec 11 10:16:30 crc kubenswrapper[4746]: I1211 10:16:30.140347 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" event={"ID":"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d","Type":"ContainerStarted","Data":"a0c6434742165cde07e9487aca8cea35edc143ebf539e2040cf51fc02b902108"} Dec 11 10:16:30 crc kubenswrapper[4746]: I1211 10:16:30.145649 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" event={"ID":"32bb2966-d412-43e4-978e-21dc59433b4c","Type":"ContainerDied","Data":"fe45cf995604a98fad94ab4b4e34dc5d0ad6230c2b49261231dbdca1c922957e"} Dec 11 10:16:30 crc kubenswrapper[4746]: I1211 10:16:30.145745 4746 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cd5cbd7b9-n7t48" Dec 11 10:16:30 crc kubenswrapper[4746]: I1211 10:16:30.145754 4746 scope.go:117] "RemoveContainer" containerID="5c7912a5988f292fb586e51c36eb144d833e6e4d61e3ba721601e58f7f8341a4" Dec 11 10:16:30 crc kubenswrapper[4746]: I1211 10:16:30.202113 4746 scope.go:117] "RemoveContainer" containerID="25de4006a56762c77186d2deeaa77a2ec106c881595169bffcc54840fb639d0e" Dec 11 10:16:30 crc kubenswrapper[4746]: I1211 10:16:30.214434 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-n7t48"] Dec 11 10:16:30 crc kubenswrapper[4746]: I1211 10:16:30.228301 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cd5cbd7b9-n7t48"] Dec 11 10:16:31 crc kubenswrapper[4746]: I1211 10:16:31.163938 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" event={"ID":"0a51bb6a-6ca0-4e2d-8427-70e92cd4730d","Type":"ContainerStarted","Data":"b166588b2047ace863841763cbbe40634f7e6fb113feb9e41e448e3fbc367a56"} Dec 11 10:16:31 crc kubenswrapper[4746]: I1211 10:16:31.166288 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:31 crc kubenswrapper[4746]: I1211 10:16:31.197000 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" podStartSLOduration=3.196972198 podStartE2EDuration="3.196972198s" podCreationTimestamp="2025-12-11 10:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:16:31.187456342 +0000 UTC m=+1364.047319675" watchObservedRunningTime="2025-12-11 10:16:31.196972198 +0000 UTC m=+1364.056835511" Dec 11 10:16:31 crc kubenswrapper[4746]: I1211 10:16:31.644856 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="32bb2966-d412-43e4-978e-21dc59433b4c" path="/var/lib/kubelet/pods/32bb2966-d412-43e4-978e-21dc59433b4c/volumes" Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.083419 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78c64bc9c5-wfn8b" Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.167390 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-4c6n5"] Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.167666 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-d558885bc-4c6n5" podUID="36593729-76b5-4f4a-b617-4943c1d51902" containerName="dnsmasq-dns" containerID="cri-o://aab026ed36d8b875bff3dcd552a2ae9692fbf364b60b3e1b1b46bee8e4cd7d8e" gracePeriod=10 Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.702748 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.769383 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-ovsdbserver-nb\") pod \"36593729-76b5-4f4a-b617-4943c1d51902\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.769563 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-openstack-edpm-ipam\") pod \"36593729-76b5-4f4a-b617-4943c1d51902\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.769646 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-dns-svc\") pod 
\"36593729-76b5-4f4a-b617-4943c1d51902\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.769692 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfstn\" (UniqueName: \"kubernetes.io/projected/36593729-76b5-4f4a-b617-4943c1d51902-kube-api-access-hfstn\") pod \"36593729-76b5-4f4a-b617-4943c1d51902\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.769743 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-ovsdbserver-sb\") pod \"36593729-76b5-4f4a-b617-4943c1d51902\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.769871 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-dns-swift-storage-0\") pod \"36593729-76b5-4f4a-b617-4943c1d51902\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.769954 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-config\") pod \"36593729-76b5-4f4a-b617-4943c1d51902\" (UID: \"36593729-76b5-4f4a-b617-4943c1d51902\") " Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.779319 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36593729-76b5-4f4a-b617-4943c1d51902-kube-api-access-hfstn" (OuterVolumeSpecName: "kube-api-access-hfstn") pod "36593729-76b5-4f4a-b617-4943c1d51902" (UID: "36593729-76b5-4f4a-b617-4943c1d51902"). InnerVolumeSpecName "kube-api-access-hfstn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.842984 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "36593729-76b5-4f4a-b617-4943c1d51902" (UID: "36593729-76b5-4f4a-b617-4943c1d51902"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.844840 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "36593729-76b5-4f4a-b617-4943c1d51902" (UID: "36593729-76b5-4f4a-b617-4943c1d51902"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.857195 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "36593729-76b5-4f4a-b617-4943c1d51902" (UID: "36593729-76b5-4f4a-b617-4943c1d51902"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.862966 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "36593729-76b5-4f4a-b617-4943c1d51902" (UID: "36593729-76b5-4f4a-b617-4943c1d51902"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.872871 4746 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.872913 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.872927 4746 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.872944 4746 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.872957 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hfstn\" (UniqueName: \"kubernetes.io/projected/36593729-76b5-4f4a-b617-4943c1d51902-kube-api-access-hfstn\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.877765 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "36593729-76b5-4f4a-b617-4943c1d51902" (UID: "36593729-76b5-4f4a-b617-4943c1d51902"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.878765 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-config" (OuterVolumeSpecName: "config") pod "36593729-76b5-4f4a-b617-4943c1d51902" (UID: "36593729-76b5-4f4a-b617-4943c1d51902"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.977602 4746 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:39 crc kubenswrapper[4746]: I1211 10:16:39.985549 4746 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36593729-76b5-4f4a-b617-4943c1d51902-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:16:40 crc kubenswrapper[4746]: I1211 10:16:40.279940 4746 generic.go:334] "Generic (PLEG): container finished" podID="36593729-76b5-4f4a-b617-4943c1d51902" containerID="aab026ed36d8b875bff3dcd552a2ae9692fbf364b60b3e1b1b46bee8e4cd7d8e" exitCode=0 Dec 11 10:16:40 crc kubenswrapper[4746]: I1211 10:16:40.280001 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-4c6n5" event={"ID":"36593729-76b5-4f4a-b617-4943c1d51902","Type":"ContainerDied","Data":"aab026ed36d8b875bff3dcd552a2ae9692fbf364b60b3e1b1b46bee8e4cd7d8e"} Dec 11 10:16:40 crc kubenswrapper[4746]: I1211 10:16:40.280040 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-d558885bc-4c6n5" event={"ID":"36593729-76b5-4f4a-b617-4943c1d51902","Type":"ContainerDied","Data":"2cb1e0218c1d7cbd1840cd9c7b259306bb378be1ff67098cd3cf2bc96094ef11"} Dec 11 10:16:40 crc kubenswrapper[4746]: I1211 10:16:40.280089 4746 scope.go:117] "RemoveContainer" 
containerID="aab026ed36d8b875bff3dcd552a2ae9692fbf364b60b3e1b1b46bee8e4cd7d8e" Dec 11 10:16:40 crc kubenswrapper[4746]: I1211 10:16:40.280272 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-d558885bc-4c6n5" Dec 11 10:16:40 crc kubenswrapper[4746]: I1211 10:16:40.354712 4746 scope.go:117] "RemoveContainer" containerID="a3e4bd26506ce67b9b1f97d74cf3ebb763bf989743ec4e2a516b53f9cc531d29" Dec 11 10:16:40 crc kubenswrapper[4746]: I1211 10:16:40.368338 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-4c6n5"] Dec 11 10:16:40 crc kubenswrapper[4746]: I1211 10:16:40.382252 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-d558885bc-4c6n5"] Dec 11 10:16:40 crc kubenswrapper[4746]: I1211 10:16:40.393624 4746 scope.go:117] "RemoveContainer" containerID="aab026ed36d8b875bff3dcd552a2ae9692fbf364b60b3e1b1b46bee8e4cd7d8e" Dec 11 10:16:40 crc kubenswrapper[4746]: E1211 10:16:40.394331 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aab026ed36d8b875bff3dcd552a2ae9692fbf364b60b3e1b1b46bee8e4cd7d8e\": container with ID starting with aab026ed36d8b875bff3dcd552a2ae9692fbf364b60b3e1b1b46bee8e4cd7d8e not found: ID does not exist" containerID="aab026ed36d8b875bff3dcd552a2ae9692fbf364b60b3e1b1b46bee8e4cd7d8e" Dec 11 10:16:40 crc kubenswrapper[4746]: I1211 10:16:40.394377 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aab026ed36d8b875bff3dcd552a2ae9692fbf364b60b3e1b1b46bee8e4cd7d8e"} err="failed to get container status \"aab026ed36d8b875bff3dcd552a2ae9692fbf364b60b3e1b1b46bee8e4cd7d8e\": rpc error: code = NotFound desc = could not find container \"aab026ed36d8b875bff3dcd552a2ae9692fbf364b60b3e1b1b46bee8e4cd7d8e\": container with ID starting with aab026ed36d8b875bff3dcd552a2ae9692fbf364b60b3e1b1b46bee8e4cd7d8e not found: ID does 
not exist" Dec 11 10:16:40 crc kubenswrapper[4746]: I1211 10:16:40.394406 4746 scope.go:117] "RemoveContainer" containerID="a3e4bd26506ce67b9b1f97d74cf3ebb763bf989743ec4e2a516b53f9cc531d29" Dec 11 10:16:40 crc kubenswrapper[4746]: E1211 10:16:40.394948 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3e4bd26506ce67b9b1f97d74cf3ebb763bf989743ec4e2a516b53f9cc531d29\": container with ID starting with a3e4bd26506ce67b9b1f97d74cf3ebb763bf989743ec4e2a516b53f9cc531d29 not found: ID does not exist" containerID="a3e4bd26506ce67b9b1f97d74cf3ebb763bf989743ec4e2a516b53f9cc531d29" Dec 11 10:16:40 crc kubenswrapper[4746]: I1211 10:16:40.395008 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3e4bd26506ce67b9b1f97d74cf3ebb763bf989743ec4e2a516b53f9cc531d29"} err="failed to get container status \"a3e4bd26506ce67b9b1f97d74cf3ebb763bf989743ec4e2a516b53f9cc531d29\": rpc error: code = NotFound desc = could not find container \"a3e4bd26506ce67b9b1f97d74cf3ebb763bf989743ec4e2a516b53f9cc531d29\": container with ID starting with a3e4bd26506ce67b9b1f97d74cf3ebb763bf989743ec4e2a516b53f9cc531d29 not found: ID does not exist" Dec 11 10:16:41 crc kubenswrapper[4746]: I1211 10:16:41.651564 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36593729-76b5-4f4a-b617-4943c1d51902" path="/var/lib/kubelet/pods/36593729-76b5-4f4a-b617-4943c1d51902/volumes" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.543628 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h"] Dec 11 10:16:55 crc kubenswrapper[4746]: E1211 10:16:55.545506 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36593729-76b5-4f4a-b617-4943c1d51902" containerName="dnsmasq-dns" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.545523 4746 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="36593729-76b5-4f4a-b617-4943c1d51902" containerName="dnsmasq-dns" Dec 11 10:16:55 crc kubenswrapper[4746]: E1211 10:16:55.545562 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32bb2966-d412-43e4-978e-21dc59433b4c" containerName="dnsmasq-dns" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.545569 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="32bb2966-d412-43e4-978e-21dc59433b4c" containerName="dnsmasq-dns" Dec 11 10:16:55 crc kubenswrapper[4746]: E1211 10:16:55.545597 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36593729-76b5-4f4a-b617-4943c1d51902" containerName="init" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.545604 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="36593729-76b5-4f4a-b617-4943c1d51902" containerName="init" Dec 11 10:16:55 crc kubenswrapper[4746]: E1211 10:16:55.545625 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32bb2966-d412-43e4-978e-21dc59433b4c" containerName="init" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.545632 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="32bb2966-d412-43e4-978e-21dc59433b4c" containerName="init" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.546846 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="32bb2966-d412-43e4-978e-21dc59433b4c" containerName="dnsmasq-dns" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.546883 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="36593729-76b5-4f4a-b617-4943c1d51902" containerName="dnsmasq-dns" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.548205 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.572572 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.573812 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.574228 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.574484 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.609089 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h"] Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.621137 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h\" (UID: \"30f79518-b92a-4058-8834-45c45c284eee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.621256 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h\" (UID: \"30f79518-b92a-4058-8834-45c45c284eee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.621383 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h\" (UID: \"30f79518-b92a-4058-8834-45c45c284eee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.621432 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxgr8\" (UniqueName: \"kubernetes.io/projected/30f79518-b92a-4058-8834-45c45c284eee-kube-api-access-hxgr8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h\" (UID: \"30f79518-b92a-4058-8834-45c45c284eee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.722823 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxgr8\" (UniqueName: \"kubernetes.io/projected/30f79518-b92a-4058-8834-45c45c284eee-kube-api-access-hxgr8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h\" (UID: \"30f79518-b92a-4058-8834-45c45c284eee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.722931 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h\" (UID: \"30f79518-b92a-4058-8834-45c45c284eee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.724801 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-ssh-key\") 
pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h\" (UID: \"30f79518-b92a-4058-8834-45c45c284eee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.725956 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h\" (UID: \"30f79518-b92a-4058-8834-45c45c284eee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.732437 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h\" (UID: \"30f79518-b92a-4058-8834-45c45c284eee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.738194 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h\" (UID: \"30f79518-b92a-4058-8834-45c45c284eee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.741680 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h\" (UID: \"30f79518-b92a-4058-8834-45c45c284eee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.748154 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hxgr8\" (UniqueName: \"kubernetes.io/projected/30f79518-b92a-4058-8834-45c45c284eee-kube-api-access-hxgr8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h\" (UID: \"30f79518-b92a-4058-8834-45c45c284eee\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" Dec 11 10:16:55 crc kubenswrapper[4746]: I1211 10:16:55.898251 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" Dec 11 10:16:56 crc kubenswrapper[4746]: I1211 10:16:56.512006 4746 generic.go:334] "Generic (PLEG): container finished" podID="9c61eb65-bc9f-4b9f-84c8-286e25295809" containerID="4800418a3ced3f9869ed6ff07cff584036277b5038c1c13fa1d75ee998ce4741" exitCode=0 Dec 11 10:16:56 crc kubenswrapper[4746]: I1211 10:16:56.512283 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9c61eb65-bc9f-4b9f-84c8-286e25295809","Type":"ContainerDied","Data":"4800418a3ced3f9869ed6ff07cff584036277b5038c1c13fa1d75ee998ce4741"} Dec 11 10:16:56 crc kubenswrapper[4746]: I1211 10:16:56.525634 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h"] Dec 11 10:16:57 crc kubenswrapper[4746]: I1211 10:16:57.537608 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9c61eb65-bc9f-4b9f-84c8-286e25295809","Type":"ContainerStarted","Data":"783c31c4bdc026c11f30c76cdf2c1ad6f2c115c12c45fdf850f007e5aa290227"} Dec 11 10:16:57 crc kubenswrapper[4746]: I1211 10:16:57.538564 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 11 10:16:57 crc kubenswrapper[4746]: I1211 10:16:57.543852 4746 generic.go:334] "Generic (PLEG): container finished" podID="e96da7ab-1e2a-4f9c-bb48-9955198a646a" 
containerID="e54c035a19ed46388db0d2f86dd9783ecdbb5851a6488c4a539b66e007890e26" exitCode=0 Dec 11 10:16:57 crc kubenswrapper[4746]: I1211 10:16:57.543926 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e96da7ab-1e2a-4f9c-bb48-9955198a646a","Type":"ContainerDied","Data":"e54c035a19ed46388db0d2f86dd9783ecdbb5851a6488c4a539b66e007890e26"} Dec 11 10:16:57 crc kubenswrapper[4746]: I1211 10:16:57.546006 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" event={"ID":"30f79518-b92a-4058-8834-45c45c284eee","Type":"ContainerStarted","Data":"d0034796cccbd00e7d9cd88487a98abbc9e430e843d50918189663c5f7a34471"} Dec 11 10:16:57 crc kubenswrapper[4746]: I1211 10:16:57.589426 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.589397507 podStartE2EDuration="38.589397507s" podCreationTimestamp="2025-12-11 10:16:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:16:57.571431615 +0000 UTC m=+1390.431294948" watchObservedRunningTime="2025-12-11 10:16:57.589397507 +0000 UTC m=+1390.449260820" Dec 11 10:16:58 crc kubenswrapper[4746]: I1211 10:16:58.578962 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e96da7ab-1e2a-4f9c-bb48-9955198a646a","Type":"ContainerStarted","Data":"8626d59a0edb8ceeb9f3ddbbc7d454b297391f721e2e27a45b4f8ebbb9d971b7"} Dec 11 10:16:58 crc kubenswrapper[4746]: I1211 10:16:58.580608 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:17:01 crc kubenswrapper[4746]: I1211 10:17:01.193226 4746 scope.go:117] "RemoveContainer" containerID="b2012f35a5122aa38fd256a3f8f8da68e7c2bf68069f2b62a93a5e85c00f5bdb" Dec 11 10:17:09 crc kubenswrapper[4746]: 
I1211 10:17:09.750251 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" event={"ID":"30f79518-b92a-4058-8834-45c45c284eee","Type":"ContainerStarted","Data":"0324ff07b9467c306d40d067a306abc24f0f148d829a8426ddc7de418d879712"} Dec 11 10:17:09 crc kubenswrapper[4746]: I1211 10:17:09.780424 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" podStartSLOduration=1.932296784 podStartE2EDuration="14.780387931s" podCreationTimestamp="2025-12-11 10:16:55 +0000 UTC" firstStartedPulling="2025-12-11 10:16:56.527247127 +0000 UTC m=+1389.387110440" lastFinishedPulling="2025-12-11 10:17:09.375338274 +0000 UTC m=+1402.235201587" observedRunningTime="2025-12-11 10:17:09.77478035 +0000 UTC m=+1402.634643663" watchObservedRunningTime="2025-12-11 10:17:09.780387931 +0000 UTC m=+1402.640251254" Dec 11 10:17:09 crc kubenswrapper[4746]: I1211 10:17:09.789041 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=48.789016633 podStartE2EDuration="48.789016633s" podCreationTimestamp="2025-12-11 10:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 10:16:58.645216217 +0000 UTC m=+1391.505079540" watchObservedRunningTime="2025-12-11 10:17:09.789016633 +0000 UTC m=+1402.648879966" Dec 11 10:17:10 crc kubenswrapper[4746]: I1211 10:17:10.328356 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 11 10:17:11 crc kubenswrapper[4746]: I1211 10:17:11.705315 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 11 10:17:22 crc kubenswrapper[4746]: I1211 10:17:22.984570 4746 generic.go:334] "Generic (PLEG): container finished" 
podID="30f79518-b92a-4058-8834-45c45c284eee" containerID="0324ff07b9467c306d40d067a306abc24f0f148d829a8426ddc7de418d879712" exitCode=0 Dec 11 10:17:22 crc kubenswrapper[4746]: I1211 10:17:22.984678 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" event={"ID":"30f79518-b92a-4058-8834-45c45c284eee","Type":"ContainerDied","Data":"0324ff07b9467c306d40d067a306abc24f0f148d829a8426ddc7de418d879712"} Dec 11 10:17:24 crc kubenswrapper[4746]: I1211 10:17:24.483186 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" Dec 11 10:17:24 crc kubenswrapper[4746]: I1211 10:17:24.554872 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-repo-setup-combined-ca-bundle\") pod \"30f79518-b92a-4058-8834-45c45c284eee\" (UID: \"30f79518-b92a-4058-8834-45c45c284eee\") " Dec 11 10:17:24 crc kubenswrapper[4746]: I1211 10:17:24.554962 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-inventory\") pod \"30f79518-b92a-4058-8834-45c45c284eee\" (UID: \"30f79518-b92a-4058-8834-45c45c284eee\") " Dec 11 10:17:24 crc kubenswrapper[4746]: I1211 10:17:24.555002 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-ssh-key\") pod \"30f79518-b92a-4058-8834-45c45c284eee\" (UID: \"30f79518-b92a-4058-8834-45c45c284eee\") " Dec 11 10:17:24 crc kubenswrapper[4746]: I1211 10:17:24.555171 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxgr8\" (UniqueName: 
\"kubernetes.io/projected/30f79518-b92a-4058-8834-45c45c284eee-kube-api-access-hxgr8\") pod \"30f79518-b92a-4058-8834-45c45c284eee\" (UID: \"30f79518-b92a-4058-8834-45c45c284eee\") " Dec 11 10:17:24 crc kubenswrapper[4746]: I1211 10:17:24.562438 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f79518-b92a-4058-8834-45c45c284eee-kube-api-access-hxgr8" (OuterVolumeSpecName: "kube-api-access-hxgr8") pod "30f79518-b92a-4058-8834-45c45c284eee" (UID: "30f79518-b92a-4058-8834-45c45c284eee"). InnerVolumeSpecName "kube-api-access-hxgr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:17:24 crc kubenswrapper[4746]: I1211 10:17:24.564193 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "30f79518-b92a-4058-8834-45c45c284eee" (UID: "30f79518-b92a-4058-8834-45c45c284eee"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:17:24 crc kubenswrapper[4746]: I1211 10:17:24.591812 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "30f79518-b92a-4058-8834-45c45c284eee" (UID: "30f79518-b92a-4058-8834-45c45c284eee"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:17:24 crc kubenswrapper[4746]: I1211 10:17:24.601083 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-inventory" (OuterVolumeSpecName: "inventory") pod "30f79518-b92a-4058-8834-45c45c284eee" (UID: "30f79518-b92a-4058-8834-45c45c284eee"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:17:24 crc kubenswrapper[4746]: I1211 10:17:24.657898 4746 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:24 crc kubenswrapper[4746]: I1211 10:17:24.657925 4746 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:24 crc kubenswrapper[4746]: I1211 10:17:24.657935 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/30f79518-b92a-4058-8834-45c45c284eee-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:24 crc kubenswrapper[4746]: I1211 10:17:24.657944 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxgr8\" (UniqueName: \"kubernetes.io/projected/30f79518-b92a-4058-8834-45c45c284eee-kube-api-access-hxgr8\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.008633 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" event={"ID":"30f79518-b92a-4058-8834-45c45c284eee","Type":"ContainerDied","Data":"d0034796cccbd00e7d9cd88487a98abbc9e430e843d50918189663c5f7a34471"} Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.008697 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0034796cccbd00e7d9cd88487a98abbc9e430e843d50918189663c5f7a34471" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.008795 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.152073 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g"] Dec 11 10:17:25 crc kubenswrapper[4746]: E1211 10:17:25.152592 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30f79518-b92a-4058-8834-45c45c284eee" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.152611 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="30f79518-b92a-4058-8834-45c45c284eee" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.152924 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="30f79518-b92a-4058-8834-45c45c284eee" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.153711 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.157644 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.162341 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.164164 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.164302 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.171770 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g"] Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.271367 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9138713-a26f-45a2-8222-3bb43892a757-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9j64g\" (UID: \"e9138713-a26f-45a2-8222-3bb43892a757\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.272670 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqzfk\" (UniqueName: \"kubernetes.io/projected/e9138713-a26f-45a2-8222-3bb43892a757-kube-api-access-jqzfk\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9j64g\" (UID: \"e9138713-a26f-45a2-8222-3bb43892a757\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.272836 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9138713-a26f-45a2-8222-3bb43892a757-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9j64g\" (UID: \"e9138713-a26f-45a2-8222-3bb43892a757\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.375629 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9138713-a26f-45a2-8222-3bb43892a757-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9j64g\" (UID: \"e9138713-a26f-45a2-8222-3bb43892a757\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.376403 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9138713-a26f-45a2-8222-3bb43892a757-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9j64g\" (UID: \"e9138713-a26f-45a2-8222-3bb43892a757\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.376429 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqzfk\" (UniqueName: \"kubernetes.io/projected/e9138713-a26f-45a2-8222-3bb43892a757-kube-api-access-jqzfk\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9j64g\" (UID: \"e9138713-a26f-45a2-8222-3bb43892a757\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.380753 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9138713-a26f-45a2-8222-3bb43892a757-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9j64g\" (UID: \"e9138713-a26f-45a2-8222-3bb43892a757\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.391076 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9138713-a26f-45a2-8222-3bb43892a757-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9j64g\" (UID: \"e9138713-a26f-45a2-8222-3bb43892a757\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.393256 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqzfk\" (UniqueName: \"kubernetes.io/projected/e9138713-a26f-45a2-8222-3bb43892a757-kube-api-access-jqzfk\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-9j64g\" (UID: \"e9138713-a26f-45a2-8222-3bb43892a757\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" Dec 11 10:17:25 crc kubenswrapper[4746]: I1211 10:17:25.477004 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" Dec 11 10:17:26 crc kubenswrapper[4746]: I1211 10:17:26.038547 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g"] Dec 11 10:17:27 crc kubenswrapper[4746]: I1211 10:17:27.047762 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" event={"ID":"e9138713-a26f-45a2-8222-3bb43892a757","Type":"ContainerStarted","Data":"ab461fd78e2cb8f8a8bce03df15348bd55bf8f20a368531a9c06adc80bf752dd"} Dec 11 10:17:27 crc kubenswrapper[4746]: I1211 10:17:27.049312 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" event={"ID":"e9138713-a26f-45a2-8222-3bb43892a757","Type":"ContainerStarted","Data":"e4bcd69929b41f89026f4fc2dd72a1bd92297ecabe640e3e009e1fe9e6e2889e"} Dec 11 10:17:27 crc kubenswrapper[4746]: I1211 10:17:27.081186 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" podStartSLOduration=1.510858834 podStartE2EDuration="2.081165367s" podCreationTimestamp="2025-12-11 10:17:25 +0000 UTC" firstStartedPulling="2025-12-11 10:17:26.045073087 +0000 UTC m=+1418.904936400" lastFinishedPulling="2025-12-11 10:17:26.61537962 +0000 UTC m=+1419.475242933" observedRunningTime="2025-12-11 10:17:27.071656551 +0000 UTC m=+1419.931519884" watchObservedRunningTime="2025-12-11 10:17:27.081165367 +0000 UTC m=+1419.941028680" Dec 11 10:17:29 crc kubenswrapper[4746]: I1211 10:17:29.877881 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:17:29 crc kubenswrapper[4746]: I1211 
10:17:29.878581 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:17:30 crc kubenswrapper[4746]: I1211 10:17:30.080520 4746 generic.go:334] "Generic (PLEG): container finished" podID="e9138713-a26f-45a2-8222-3bb43892a757" containerID="ab461fd78e2cb8f8a8bce03df15348bd55bf8f20a368531a9c06adc80bf752dd" exitCode=0 Dec 11 10:17:30 crc kubenswrapper[4746]: I1211 10:17:30.080570 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" event={"ID":"e9138713-a26f-45a2-8222-3bb43892a757","Type":"ContainerDied","Data":"ab461fd78e2cb8f8a8bce03df15348bd55bf8f20a368531a9c06adc80bf752dd"} Dec 11 10:17:31 crc kubenswrapper[4746]: I1211 10:17:31.522467 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" Dec 11 10:17:31 crc kubenswrapper[4746]: I1211 10:17:31.616336 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9138713-a26f-45a2-8222-3bb43892a757-ssh-key\") pod \"e9138713-a26f-45a2-8222-3bb43892a757\" (UID: \"e9138713-a26f-45a2-8222-3bb43892a757\") " Dec 11 10:17:31 crc kubenswrapper[4746]: I1211 10:17:31.616577 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9138713-a26f-45a2-8222-3bb43892a757-inventory\") pod \"e9138713-a26f-45a2-8222-3bb43892a757\" (UID: \"e9138713-a26f-45a2-8222-3bb43892a757\") " Dec 11 10:17:31 crc kubenswrapper[4746]: I1211 10:17:31.616704 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqzfk\" (UniqueName: \"kubernetes.io/projected/e9138713-a26f-45a2-8222-3bb43892a757-kube-api-access-jqzfk\") pod \"e9138713-a26f-45a2-8222-3bb43892a757\" (UID: \"e9138713-a26f-45a2-8222-3bb43892a757\") " Dec 11 10:17:31 crc kubenswrapper[4746]: I1211 10:17:31.623943 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9138713-a26f-45a2-8222-3bb43892a757-kube-api-access-jqzfk" (OuterVolumeSpecName: "kube-api-access-jqzfk") pod "e9138713-a26f-45a2-8222-3bb43892a757" (UID: "e9138713-a26f-45a2-8222-3bb43892a757"). InnerVolumeSpecName "kube-api-access-jqzfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:17:31 crc kubenswrapper[4746]: I1211 10:17:31.655579 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9138713-a26f-45a2-8222-3bb43892a757-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e9138713-a26f-45a2-8222-3bb43892a757" (UID: "e9138713-a26f-45a2-8222-3bb43892a757"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:17:31 crc kubenswrapper[4746]: I1211 10:17:31.666315 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9138713-a26f-45a2-8222-3bb43892a757-inventory" (OuterVolumeSpecName: "inventory") pod "e9138713-a26f-45a2-8222-3bb43892a757" (UID: "e9138713-a26f-45a2-8222-3bb43892a757"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:17:31 crc kubenswrapper[4746]: I1211 10:17:31.719345 4746 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e9138713-a26f-45a2-8222-3bb43892a757-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:31 crc kubenswrapper[4746]: I1211 10:17:31.719419 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqzfk\" (UniqueName: \"kubernetes.io/projected/e9138713-a26f-45a2-8222-3bb43892a757-kube-api-access-jqzfk\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:31 crc kubenswrapper[4746]: I1211 10:17:31.719441 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e9138713-a26f-45a2-8222-3bb43892a757-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.100888 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" event={"ID":"e9138713-a26f-45a2-8222-3bb43892a757","Type":"ContainerDied","Data":"e4bcd69929b41f89026f4fc2dd72a1bd92297ecabe640e3e009e1fe9e6e2889e"} Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.100948 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4bcd69929b41f89026f4fc2dd72a1bd92297ecabe640e3e009e1fe9e6e2889e" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.100959 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-9j64g" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.192980 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj"] Dec 11 10:17:32 crc kubenswrapper[4746]: E1211 10:17:32.193381 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9138713-a26f-45a2-8222-3bb43892a757" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.193400 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9138713-a26f-45a2-8222-3bb43892a757" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.193613 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9138713-a26f-45a2-8222-3bb43892a757" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.194219 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.201629 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.201631 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.201858 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.202683 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.207616 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj"] Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.330271 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj\" (UID: \"3de7e541-4120-4c78-866b-9991eb4d1810\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.330337 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj\" (UID: \"3de7e541-4120-4c78-866b-9991eb4d1810\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.330417 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj\" (UID: \"3de7e541-4120-4c78-866b-9991eb4d1810\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.330478 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhmqt\" (UniqueName: \"kubernetes.io/projected/3de7e541-4120-4c78-866b-9991eb4d1810-kube-api-access-qhmqt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj\" (UID: \"3de7e541-4120-4c78-866b-9991eb4d1810\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.438216 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj\" (UID: \"3de7e541-4120-4c78-866b-9991eb4d1810\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.438307 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj\" (UID: \"3de7e541-4120-4c78-866b-9991eb4d1810\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.438449 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj\" (UID: \"3de7e541-4120-4c78-866b-9991eb4d1810\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.438530 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhmqt\" (UniqueName: \"kubernetes.io/projected/3de7e541-4120-4c78-866b-9991eb4d1810-kube-api-access-qhmqt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj\" (UID: \"3de7e541-4120-4c78-866b-9991eb4d1810\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.445691 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj\" (UID: \"3de7e541-4120-4c78-866b-9991eb4d1810\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.451737 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj\" (UID: \"3de7e541-4120-4c78-866b-9991eb4d1810\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.463407 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj\" (UID: \"3de7e541-4120-4c78-866b-9991eb4d1810\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.468900 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qhmqt\" (UniqueName: \"kubernetes.io/projected/3de7e541-4120-4c78-866b-9991eb4d1810-kube-api-access-qhmqt\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj\" (UID: \"3de7e541-4120-4c78-866b-9991eb4d1810\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" Dec 11 10:17:32 crc kubenswrapper[4746]: I1211 10:17:32.522186 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" Dec 11 10:17:33 crc kubenswrapper[4746]: I1211 10:17:33.098751 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj"] Dec 11 10:17:34 crc kubenswrapper[4746]: I1211 10:17:34.121250 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" event={"ID":"3de7e541-4120-4c78-866b-9991eb4d1810","Type":"ContainerStarted","Data":"2c4b09d813b479944f12766a02a603b2d78edd26bb63646562537dc128ddc7e4"} Dec 11 10:17:34 crc kubenswrapper[4746]: I1211 10:17:34.121635 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" event={"ID":"3de7e541-4120-4c78-866b-9991eb4d1810","Type":"ContainerStarted","Data":"2cee356e9600c96e85556ffa5739cd17f15b74240bb94667bc48f04be0e55586"} Dec 11 10:17:34 crc kubenswrapper[4746]: I1211 10:17:34.145975 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" podStartSLOduration=1.623090396 podStartE2EDuration="2.145954645s" podCreationTimestamp="2025-12-11 10:17:32 +0000 UTC" firstStartedPulling="2025-12-11 10:17:33.110828781 +0000 UTC m=+1425.970692094" lastFinishedPulling="2025-12-11 10:17:33.63369303 +0000 UTC m=+1426.493556343" observedRunningTime="2025-12-11 10:17:34.145367679 +0000 UTC m=+1427.005230992" 
watchObservedRunningTime="2025-12-11 10:17:34.145954645 +0000 UTC m=+1427.005817958" Dec 11 10:17:59 crc kubenswrapper[4746]: I1211 10:17:59.877721 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:17:59 crc kubenswrapper[4746]: I1211 10:17:59.878507 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:18:01 crc kubenswrapper[4746]: I1211 10:18:01.358411 4746 scope.go:117] "RemoveContainer" containerID="f885efe0e4e9ccfd18d9dfb95dcb0fac3a0da25fadcaebbe269ec15c28383fc1" Dec 11 10:18:01 crc kubenswrapper[4746]: I1211 10:18:01.396501 4746 scope.go:117] "RemoveContainer" containerID="b8e5c1fd39f2676af136ce6be36675b0dfe3dd5cfef885438408fa384e27032e" Dec 11 10:18:01 crc kubenswrapper[4746]: I1211 10:18:01.454330 4746 scope.go:117] "RemoveContainer" containerID="f9dda2df02fe8fbb3c8748564c18ac246d96c03c35812c00f43554c7afeb7dc1" Dec 11 10:18:01 crc kubenswrapper[4746]: I1211 10:18:01.523089 4746 scope.go:117] "RemoveContainer" containerID="d942758c91e00bfc84c6304419c2eabb4dd2225b452525de33f3b153b11fb549" Dec 11 10:18:01 crc kubenswrapper[4746]: I1211 10:18:01.575808 4746 scope.go:117] "RemoveContainer" containerID="5f84a67f069fac5d08a89946a38929343f19acb2aae580250e324e5a35f213b9" Dec 11 10:18:29 crc kubenswrapper[4746]: I1211 10:18:29.877449 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:18:29 crc kubenswrapper[4746]: I1211 10:18:29.879222 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:18:29 crc kubenswrapper[4746]: I1211 10:18:29.879337 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 10:18:29 crc kubenswrapper[4746]: I1211 10:18:29.880519 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c4bc2bbb26d764668868d6659aa470877d6623d9d959c05982277a00cdacbca4"} pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:18:29 crc kubenswrapper[4746]: I1211 10:18:29.880588 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" containerID="cri-o://c4bc2bbb26d764668868d6659aa470877d6623d9d959c05982277a00cdacbca4" gracePeriod=600 Dec 11 10:18:30 crc kubenswrapper[4746]: I1211 10:18:30.782555 4746 generic.go:334] "Generic (PLEG): container finished" podID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerID="c4bc2bbb26d764668868d6659aa470877d6623d9d959c05982277a00cdacbca4" exitCode=0 Dec 11 10:18:30 crc kubenswrapper[4746]: I1211 10:18:30.782652 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" 
event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerDied","Data":"c4bc2bbb26d764668868d6659aa470877d6623d9d959c05982277a00cdacbca4"} Dec 11 10:18:30 crc kubenswrapper[4746]: I1211 10:18:30.783397 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb"} Dec 11 10:18:30 crc kubenswrapper[4746]: I1211 10:18:30.783427 4746 scope.go:117] "RemoveContainer" containerID="c83e849437aeb890459f914e7f689680afdaaf0b057ea749f6e91f887067183f" Dec 11 10:19:01 crc kubenswrapper[4746]: I1211 10:19:01.836503 4746 scope.go:117] "RemoveContainer" containerID="eb638379b914e22cfce30de40cb2cf4ebe7022713c56d8faffa7195121e1bd27" Dec 11 10:19:01 crc kubenswrapper[4746]: I1211 10:19:01.865538 4746 scope.go:117] "RemoveContainer" containerID="657c81e092410dc723272734c2690973035af2198bfaa777cb1ec07ae10b0b95" Dec 11 10:19:01 crc kubenswrapper[4746]: I1211 10:19:01.887025 4746 scope.go:117] "RemoveContainer" containerID="34f9998be0753a53f6efd0fae94810c513ddf757f4f1c98a62b83175766c9f25" Dec 11 10:20:01 crc kubenswrapper[4746]: I1211 10:20:01.960815 4746 scope.go:117] "RemoveContainer" containerID="dc33edf149c87829f0a0b1703d520b40a5cc6d73a082a37be38ce8be64e64ed2" Dec 11 10:20:07 crc kubenswrapper[4746]: I1211 10:20:07.720088 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-62zgj"] Dec 11 10:20:07 crc kubenswrapper[4746]: I1211 10:20:07.724406 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:07 crc kubenswrapper[4746]: I1211 10:20:07.753654 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-62zgj"] Dec 11 10:20:07 crc kubenswrapper[4746]: I1211 10:20:07.823323 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zlpl\" (UniqueName: \"kubernetes.io/projected/eb21215d-1e1e-41db-be78-4217b75decc9-kube-api-access-7zlpl\") pod \"community-operators-62zgj\" (UID: \"eb21215d-1e1e-41db-be78-4217b75decc9\") " pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:07 crc kubenswrapper[4746]: I1211 10:20:07.823470 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb21215d-1e1e-41db-be78-4217b75decc9-utilities\") pod \"community-operators-62zgj\" (UID: \"eb21215d-1e1e-41db-be78-4217b75decc9\") " pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:07 crc kubenswrapper[4746]: I1211 10:20:07.823586 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb21215d-1e1e-41db-be78-4217b75decc9-catalog-content\") pod \"community-operators-62zgj\" (UID: \"eb21215d-1e1e-41db-be78-4217b75decc9\") " pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:07 crc kubenswrapper[4746]: I1211 10:20:07.925505 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zlpl\" (UniqueName: \"kubernetes.io/projected/eb21215d-1e1e-41db-be78-4217b75decc9-kube-api-access-7zlpl\") pod \"community-operators-62zgj\" (UID: \"eb21215d-1e1e-41db-be78-4217b75decc9\") " pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:07 crc kubenswrapper[4746]: I1211 10:20:07.925644 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb21215d-1e1e-41db-be78-4217b75decc9-utilities\") pod \"community-operators-62zgj\" (UID: \"eb21215d-1e1e-41db-be78-4217b75decc9\") " pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:07 crc kubenswrapper[4746]: I1211 10:20:07.925780 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb21215d-1e1e-41db-be78-4217b75decc9-catalog-content\") pod \"community-operators-62zgj\" (UID: \"eb21215d-1e1e-41db-be78-4217b75decc9\") " pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:07 crc kubenswrapper[4746]: I1211 10:20:07.926312 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb21215d-1e1e-41db-be78-4217b75decc9-utilities\") pod \"community-operators-62zgj\" (UID: \"eb21215d-1e1e-41db-be78-4217b75decc9\") " pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:07 crc kubenswrapper[4746]: I1211 10:20:07.926342 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb21215d-1e1e-41db-be78-4217b75decc9-catalog-content\") pod \"community-operators-62zgj\" (UID: \"eb21215d-1e1e-41db-be78-4217b75decc9\") " pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:07 crc kubenswrapper[4746]: I1211 10:20:07.959529 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zlpl\" (UniqueName: \"kubernetes.io/projected/eb21215d-1e1e-41db-be78-4217b75decc9-kube-api-access-7zlpl\") pod \"community-operators-62zgj\" (UID: \"eb21215d-1e1e-41db-be78-4217b75decc9\") " pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:08 crc kubenswrapper[4746]: I1211 10:20:08.053980 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:08 crc kubenswrapper[4746]: I1211 10:20:08.661182 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-62zgj"] Dec 11 10:20:09 crc kubenswrapper[4746]: I1211 10:20:09.093281 4746 generic.go:334] "Generic (PLEG): container finished" podID="eb21215d-1e1e-41db-be78-4217b75decc9" containerID="be9d4aaad90f054e702a993abde62b52692b7fa1b44651243d4c64a8af5b158e" exitCode=0 Dec 11 10:20:09 crc kubenswrapper[4746]: I1211 10:20:09.093423 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62zgj" event={"ID":"eb21215d-1e1e-41db-be78-4217b75decc9","Type":"ContainerDied","Data":"be9d4aaad90f054e702a993abde62b52692b7fa1b44651243d4c64a8af5b158e"} Dec 11 10:20:09 crc kubenswrapper[4746]: I1211 10:20:09.093669 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62zgj" event={"ID":"eb21215d-1e1e-41db-be78-4217b75decc9","Type":"ContainerStarted","Data":"2d86ed7c195f587d8a5ca57b4ecb6b5f58b9f4c7413f4c6f7b6915a340d72210"} Dec 11 10:20:09 crc kubenswrapper[4746]: I1211 10:20:09.095397 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 10:20:11 crc kubenswrapper[4746]: I1211 10:20:11.113655 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62zgj" event={"ID":"eb21215d-1e1e-41db-be78-4217b75decc9","Type":"ContainerStarted","Data":"e63365fa03e625a7012b1391b0b63591c950267a4295a1ef07ad6a2864d2362f"} Dec 11 10:20:12 crc kubenswrapper[4746]: I1211 10:20:12.128235 4746 generic.go:334] "Generic (PLEG): container finished" podID="eb21215d-1e1e-41db-be78-4217b75decc9" containerID="e63365fa03e625a7012b1391b0b63591c950267a4295a1ef07ad6a2864d2362f" exitCode=0 Dec 11 10:20:12 crc kubenswrapper[4746]: I1211 10:20:12.128345 4746 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-62zgj" event={"ID":"eb21215d-1e1e-41db-be78-4217b75decc9","Type":"ContainerDied","Data":"e63365fa03e625a7012b1391b0b63591c950267a4295a1ef07ad6a2864d2362f"} Dec 11 10:20:13 crc kubenswrapper[4746]: I1211 10:20:13.144499 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62zgj" event={"ID":"eb21215d-1e1e-41db-be78-4217b75decc9","Type":"ContainerStarted","Data":"3d2ea6f306862e0dcfa9555b3e3a7ab049018d31f931d6f63161b820f1708060"} Dec 11 10:20:13 crc kubenswrapper[4746]: I1211 10:20:13.171888 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-62zgj" podStartSLOduration=2.6410648930000002 podStartE2EDuration="6.171863561s" podCreationTimestamp="2025-12-11 10:20:07 +0000 UTC" firstStartedPulling="2025-12-11 10:20:09.095172851 +0000 UTC m=+1581.955036164" lastFinishedPulling="2025-12-11 10:20:12.625971509 +0000 UTC m=+1585.485834832" observedRunningTime="2025-12-11 10:20:13.163916836 +0000 UTC m=+1586.023780149" watchObservedRunningTime="2025-12-11 10:20:13.171863561 +0000 UTC m=+1586.031726874" Dec 11 10:20:18 crc kubenswrapper[4746]: I1211 10:20:18.057031 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:18 crc kubenswrapper[4746]: I1211 10:20:18.057946 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:18 crc kubenswrapper[4746]: I1211 10:20:18.120875 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:18 crc kubenswrapper[4746]: I1211 10:20:18.278440 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:18 crc kubenswrapper[4746]: I1211 
10:20:18.363743 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-62zgj"] Dec 11 10:20:20 crc kubenswrapper[4746]: I1211 10:20:20.223905 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-62zgj" podUID="eb21215d-1e1e-41db-be78-4217b75decc9" containerName="registry-server" containerID="cri-o://3d2ea6f306862e0dcfa9555b3e3a7ab049018d31f931d6f63161b820f1708060" gracePeriod=2 Dec 11 10:20:20 crc kubenswrapper[4746]: I1211 10:20:20.754175 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:20 crc kubenswrapper[4746]: I1211 10:20:20.919285 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb21215d-1e1e-41db-be78-4217b75decc9-utilities\") pod \"eb21215d-1e1e-41db-be78-4217b75decc9\" (UID: \"eb21215d-1e1e-41db-be78-4217b75decc9\") " Dec 11 10:20:20 crc kubenswrapper[4746]: I1211 10:20:20.919494 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb21215d-1e1e-41db-be78-4217b75decc9-catalog-content\") pod \"eb21215d-1e1e-41db-be78-4217b75decc9\" (UID: \"eb21215d-1e1e-41db-be78-4217b75decc9\") " Dec 11 10:20:20 crc kubenswrapper[4746]: I1211 10:20:20.919589 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zlpl\" (UniqueName: \"kubernetes.io/projected/eb21215d-1e1e-41db-be78-4217b75decc9-kube-api-access-7zlpl\") pod \"eb21215d-1e1e-41db-be78-4217b75decc9\" (UID: \"eb21215d-1e1e-41db-be78-4217b75decc9\") " Dec 11 10:20:20 crc kubenswrapper[4746]: I1211 10:20:20.920353 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb21215d-1e1e-41db-be78-4217b75decc9-utilities" (OuterVolumeSpecName: 
"utilities") pod "eb21215d-1e1e-41db-be78-4217b75decc9" (UID: "eb21215d-1e1e-41db-be78-4217b75decc9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:20:20 crc kubenswrapper[4746]: I1211 10:20:20.920547 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb21215d-1e1e-41db-be78-4217b75decc9-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:20:20 crc kubenswrapper[4746]: I1211 10:20:20.932395 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb21215d-1e1e-41db-be78-4217b75decc9-kube-api-access-7zlpl" (OuterVolumeSpecName: "kube-api-access-7zlpl") pod "eb21215d-1e1e-41db-be78-4217b75decc9" (UID: "eb21215d-1e1e-41db-be78-4217b75decc9"). InnerVolumeSpecName "kube-api-access-7zlpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:20:20 crc kubenswrapper[4746]: I1211 10:20:20.974635 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb21215d-1e1e-41db-be78-4217b75decc9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb21215d-1e1e-41db-be78-4217b75decc9" (UID: "eb21215d-1e1e-41db-be78-4217b75decc9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.023412 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb21215d-1e1e-41db-be78-4217b75decc9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.023465 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zlpl\" (UniqueName: \"kubernetes.io/projected/eb21215d-1e1e-41db-be78-4217b75decc9-kube-api-access-7zlpl\") on node \"crc\" DevicePath \"\"" Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.237483 4746 generic.go:334] "Generic (PLEG): container finished" podID="eb21215d-1e1e-41db-be78-4217b75decc9" containerID="3d2ea6f306862e0dcfa9555b3e3a7ab049018d31f931d6f63161b820f1708060" exitCode=0 Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.237576 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62zgj" event={"ID":"eb21215d-1e1e-41db-be78-4217b75decc9","Type":"ContainerDied","Data":"3d2ea6f306862e0dcfa9555b3e3a7ab049018d31f931d6f63161b820f1708060"} Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.237622 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-62zgj" Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.237650 4746 scope.go:117] "RemoveContainer" containerID="3d2ea6f306862e0dcfa9555b3e3a7ab049018d31f931d6f63161b820f1708060" Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.237633 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-62zgj" event={"ID":"eb21215d-1e1e-41db-be78-4217b75decc9","Type":"ContainerDied","Data":"2d86ed7c195f587d8a5ca57b4ecb6b5f58b9f4c7413f4c6f7b6915a340d72210"} Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.267400 4746 scope.go:117] "RemoveContainer" containerID="e63365fa03e625a7012b1391b0b63591c950267a4295a1ef07ad6a2864d2362f" Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.274690 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-62zgj"] Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.283535 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-62zgj"] Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.304407 4746 scope.go:117] "RemoveContainer" containerID="be9d4aaad90f054e702a993abde62b52692b7fa1b44651243d4c64a8af5b158e" Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.361642 4746 scope.go:117] "RemoveContainer" containerID="3d2ea6f306862e0dcfa9555b3e3a7ab049018d31f931d6f63161b820f1708060" Dec 11 10:20:21 crc kubenswrapper[4746]: E1211 10:20:21.362212 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d2ea6f306862e0dcfa9555b3e3a7ab049018d31f931d6f63161b820f1708060\": container with ID starting with 3d2ea6f306862e0dcfa9555b3e3a7ab049018d31f931d6f63161b820f1708060 not found: ID does not exist" containerID="3d2ea6f306862e0dcfa9555b3e3a7ab049018d31f931d6f63161b820f1708060" Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.362345 4746 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d2ea6f306862e0dcfa9555b3e3a7ab049018d31f931d6f63161b820f1708060"} err="failed to get container status \"3d2ea6f306862e0dcfa9555b3e3a7ab049018d31f931d6f63161b820f1708060\": rpc error: code = NotFound desc = could not find container \"3d2ea6f306862e0dcfa9555b3e3a7ab049018d31f931d6f63161b820f1708060\": container with ID starting with 3d2ea6f306862e0dcfa9555b3e3a7ab049018d31f931d6f63161b820f1708060 not found: ID does not exist" Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.362463 4746 scope.go:117] "RemoveContainer" containerID="e63365fa03e625a7012b1391b0b63591c950267a4295a1ef07ad6a2864d2362f" Dec 11 10:20:21 crc kubenswrapper[4746]: E1211 10:20:21.362953 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e63365fa03e625a7012b1391b0b63591c950267a4295a1ef07ad6a2864d2362f\": container with ID starting with e63365fa03e625a7012b1391b0b63591c950267a4295a1ef07ad6a2864d2362f not found: ID does not exist" containerID="e63365fa03e625a7012b1391b0b63591c950267a4295a1ef07ad6a2864d2362f" Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.363059 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e63365fa03e625a7012b1391b0b63591c950267a4295a1ef07ad6a2864d2362f"} err="failed to get container status \"e63365fa03e625a7012b1391b0b63591c950267a4295a1ef07ad6a2864d2362f\": rpc error: code = NotFound desc = could not find container \"e63365fa03e625a7012b1391b0b63591c950267a4295a1ef07ad6a2864d2362f\": container with ID starting with e63365fa03e625a7012b1391b0b63591c950267a4295a1ef07ad6a2864d2362f not found: ID does not exist" Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.363150 4746 scope.go:117] "RemoveContainer" containerID="be9d4aaad90f054e702a993abde62b52692b7fa1b44651243d4c64a8af5b158e" Dec 11 10:20:21 crc kubenswrapper[4746]: E1211 
10:20:21.363539 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be9d4aaad90f054e702a993abde62b52692b7fa1b44651243d4c64a8af5b158e\": container with ID starting with be9d4aaad90f054e702a993abde62b52692b7fa1b44651243d4c64a8af5b158e not found: ID does not exist" containerID="be9d4aaad90f054e702a993abde62b52692b7fa1b44651243d4c64a8af5b158e" Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.363619 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be9d4aaad90f054e702a993abde62b52692b7fa1b44651243d4c64a8af5b158e"} err="failed to get container status \"be9d4aaad90f054e702a993abde62b52692b7fa1b44651243d4c64a8af5b158e\": rpc error: code = NotFound desc = could not find container \"be9d4aaad90f054e702a993abde62b52692b7fa1b44651243d4c64a8af5b158e\": container with ID starting with be9d4aaad90f054e702a993abde62b52692b7fa1b44651243d4c64a8af5b158e not found: ID does not exist" Dec 11 10:20:21 crc kubenswrapper[4746]: I1211 10:20:21.642626 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb21215d-1e1e-41db-be78-4217b75decc9" path="/var/lib/kubelet/pods/eb21215d-1e1e-41db-be78-4217b75decc9/volumes" Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.421663 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hcwv7"] Dec 11 10:20:31 crc kubenswrapper[4746]: E1211 10:20:31.422610 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb21215d-1e1e-41db-be78-4217b75decc9" containerName="extract-utilities" Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.422623 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb21215d-1e1e-41db-be78-4217b75decc9" containerName="extract-utilities" Dec 11 10:20:31 crc kubenswrapper[4746]: E1211 10:20:31.422635 4746 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="eb21215d-1e1e-41db-be78-4217b75decc9" containerName="registry-server" Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.422641 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb21215d-1e1e-41db-be78-4217b75decc9" containerName="registry-server" Dec 11 10:20:31 crc kubenswrapper[4746]: E1211 10:20:31.422675 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb21215d-1e1e-41db-be78-4217b75decc9" containerName="extract-content" Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.422682 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb21215d-1e1e-41db-be78-4217b75decc9" containerName="extract-content" Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.422860 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb21215d-1e1e-41db-be78-4217b75decc9" containerName="registry-server" Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.431582 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hcwv7" Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.457685 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hcwv7"] Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.537457 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm5rv\" (UniqueName: \"kubernetes.io/projected/164dbbfa-f646-47f0-9de6-d5f466032c15-kube-api-access-fm5rv\") pod \"certified-operators-hcwv7\" (UID: \"164dbbfa-f646-47f0-9de6-d5f466032c15\") " pod="openshift-marketplace/certified-operators-hcwv7" Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.537646 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164dbbfa-f646-47f0-9de6-d5f466032c15-utilities\") pod \"certified-operators-hcwv7\" (UID: 
\"164dbbfa-f646-47f0-9de6-d5f466032c15\") " pod="openshift-marketplace/certified-operators-hcwv7" Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.538393 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164dbbfa-f646-47f0-9de6-d5f466032c15-catalog-content\") pod \"certified-operators-hcwv7\" (UID: \"164dbbfa-f646-47f0-9de6-d5f466032c15\") " pod="openshift-marketplace/certified-operators-hcwv7" Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.640220 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm5rv\" (UniqueName: \"kubernetes.io/projected/164dbbfa-f646-47f0-9de6-d5f466032c15-kube-api-access-fm5rv\") pod \"certified-operators-hcwv7\" (UID: \"164dbbfa-f646-47f0-9de6-d5f466032c15\") " pod="openshift-marketplace/certified-operators-hcwv7" Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.640297 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164dbbfa-f646-47f0-9de6-d5f466032c15-utilities\") pod \"certified-operators-hcwv7\" (UID: \"164dbbfa-f646-47f0-9de6-d5f466032c15\") " pod="openshift-marketplace/certified-operators-hcwv7" Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.640375 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164dbbfa-f646-47f0-9de6-d5f466032c15-catalog-content\") pod \"certified-operators-hcwv7\" (UID: \"164dbbfa-f646-47f0-9de6-d5f466032c15\") " pod="openshift-marketplace/certified-operators-hcwv7" Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.640833 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/164dbbfa-f646-47f0-9de6-d5f466032c15-catalog-content\") pod \"certified-operators-hcwv7\" (UID: 
\"164dbbfa-f646-47f0-9de6-d5f466032c15\") " pod="openshift-marketplace/certified-operators-hcwv7" Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.640830 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/164dbbfa-f646-47f0-9de6-d5f466032c15-utilities\") pod \"certified-operators-hcwv7\" (UID: \"164dbbfa-f646-47f0-9de6-d5f466032c15\") " pod="openshift-marketplace/certified-operators-hcwv7" Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.665024 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm5rv\" (UniqueName: \"kubernetes.io/projected/164dbbfa-f646-47f0-9de6-d5f466032c15-kube-api-access-fm5rv\") pod \"certified-operators-hcwv7\" (UID: \"164dbbfa-f646-47f0-9de6-d5f466032c15\") " pod="openshift-marketplace/certified-operators-hcwv7" Dec 11 10:20:31 crc kubenswrapper[4746]: I1211 10:20:31.762290 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hcwv7" Dec 11 10:20:32 crc kubenswrapper[4746]: I1211 10:20:32.358896 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hcwv7"] Dec 11 10:20:33 crc kubenswrapper[4746]: I1211 10:20:33.370358 4746 generic.go:334] "Generic (PLEG): container finished" podID="164dbbfa-f646-47f0-9de6-d5f466032c15" containerID="e89fc2b2812c24e4f62101f14c3911a77b8c05cd33e93344b935ee6665e1b48b" exitCode=0 Dec 11 10:20:33 crc kubenswrapper[4746]: I1211 10:20:33.370472 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcwv7" event={"ID":"164dbbfa-f646-47f0-9de6-d5f466032c15","Type":"ContainerDied","Data":"e89fc2b2812c24e4f62101f14c3911a77b8c05cd33e93344b935ee6665e1b48b"} Dec 11 10:20:33 crc kubenswrapper[4746]: I1211 10:20:33.370905 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcwv7" 
event={"ID":"164dbbfa-f646-47f0-9de6-d5f466032c15","Type":"ContainerStarted","Data":"62d9f76e889049cc526bc989a2e00953aeb1aec7e571809c532349bd086f9ac6"} Dec 11 10:20:38 crc kubenswrapper[4746]: I1211 10:20:38.428398 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcwv7" event={"ID":"164dbbfa-f646-47f0-9de6-d5f466032c15","Type":"ContainerStarted","Data":"016a756c3cb080e4d750e09de2b37d054e416755d3a0ec5b777d1b491a164640"} Dec 11 10:20:39 crc kubenswrapper[4746]: I1211 10:20:39.444482 4746 generic.go:334] "Generic (PLEG): container finished" podID="164dbbfa-f646-47f0-9de6-d5f466032c15" containerID="016a756c3cb080e4d750e09de2b37d054e416755d3a0ec5b777d1b491a164640" exitCode=0 Dec 11 10:20:39 crc kubenswrapper[4746]: I1211 10:20:39.444533 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcwv7" event={"ID":"164dbbfa-f646-47f0-9de6-d5f466032c15","Type":"ContainerDied","Data":"016a756c3cb080e4d750e09de2b37d054e416755d3a0ec5b777d1b491a164640"} Dec 11 10:20:41 crc kubenswrapper[4746]: I1211 10:20:41.473041 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hcwv7" event={"ID":"164dbbfa-f646-47f0-9de6-d5f466032c15","Type":"ContainerStarted","Data":"baec98ece1a0d3a67407b0ab2b61513e6581fb32c5df4468149542e18bf6040c"} Dec 11 10:20:41 crc kubenswrapper[4746]: I1211 10:20:41.496136 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hcwv7" podStartSLOduration=2.9649866339999997 podStartE2EDuration="10.496108236s" podCreationTimestamp="2025-12-11 10:20:31 +0000 UTC" firstStartedPulling="2025-12-11 10:20:33.377843961 +0000 UTC m=+1606.237707274" lastFinishedPulling="2025-12-11 10:20:40.908965563 +0000 UTC m=+1613.768828876" observedRunningTime="2025-12-11 10:20:41.491242005 +0000 UTC m=+1614.351105328" watchObservedRunningTime="2025-12-11 10:20:41.496108236 +0000 
UTC m=+1614.355971549" Dec 11 10:20:41 crc kubenswrapper[4746]: I1211 10:20:41.762717 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hcwv7" Dec 11 10:20:41 crc kubenswrapper[4746]: I1211 10:20:41.763302 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hcwv7" Dec 11 10:20:42 crc kubenswrapper[4746]: I1211 10:20:42.811690 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-hcwv7" podUID="164dbbfa-f646-47f0-9de6-d5f466032c15" containerName="registry-server" probeResult="failure" output=< Dec 11 10:20:42 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s Dec 11 10:20:42 crc kubenswrapper[4746]: > Dec 11 10:20:51 crc kubenswrapper[4746]: I1211 10:20:51.887535 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hcwv7" Dec 11 10:20:51 crc kubenswrapper[4746]: I1211 10:20:51.940331 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hcwv7" Dec 11 10:20:52 crc kubenswrapper[4746]: I1211 10:20:52.213320 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hcwv7"] Dec 11 10:20:52 crc kubenswrapper[4746]: I1211 10:20:52.381437 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pwjvd"] Dec 11 10:20:52 crc kubenswrapper[4746]: I1211 10:20:52.381712 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pwjvd" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" containerName="registry-server" containerID="cri-o://57615ae741ee918ab363e11afeb2fdbe055a09950702b1201dff8d781e461ecf" gracePeriod=2 Dec 11 10:20:52 crc kubenswrapper[4746]: I1211 10:20:52.590204 4746 
generic.go:334] "Generic (PLEG): container finished" podID="e2d46174-be57-41e7-9363-896cc0a860c6" containerID="57615ae741ee918ab363e11afeb2fdbe055a09950702b1201dff8d781e461ecf" exitCode=0 Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:52.590306 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwjvd" event={"ID":"e2d46174-be57-41e7-9363-896cc0a860c6","Type":"ContainerDied","Data":"57615ae741ee918ab363e11afeb2fdbe055a09950702b1201dff8d781e461ecf"} Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:52.925716 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:53.083029 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-st5zv\" (UniqueName: \"kubernetes.io/projected/e2d46174-be57-41e7-9363-896cc0a860c6-kube-api-access-st5zv\") pod \"e2d46174-be57-41e7-9363-896cc0a860c6\" (UID: \"e2d46174-be57-41e7-9363-896cc0a860c6\") " Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:53.083193 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d46174-be57-41e7-9363-896cc0a860c6-utilities\") pod \"e2d46174-be57-41e7-9363-896cc0a860c6\" (UID: \"e2d46174-be57-41e7-9363-896cc0a860c6\") " Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:53.083376 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d46174-be57-41e7-9363-896cc0a860c6-catalog-content\") pod \"e2d46174-be57-41e7-9363-896cc0a860c6\" (UID: \"e2d46174-be57-41e7-9363-896cc0a860c6\") " Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:53.083972 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2d46174-be57-41e7-9363-896cc0a860c6-utilities" 
(OuterVolumeSpecName: "utilities") pod "e2d46174-be57-41e7-9363-896cc0a860c6" (UID: "e2d46174-be57-41e7-9363-896cc0a860c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:53.096312 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d46174-be57-41e7-9363-896cc0a860c6-kube-api-access-st5zv" (OuterVolumeSpecName: "kube-api-access-st5zv") pod "e2d46174-be57-41e7-9363-896cc0a860c6" (UID: "e2d46174-be57-41e7-9363-896cc0a860c6"). InnerVolumeSpecName "kube-api-access-st5zv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:53.149938 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2d46174-be57-41e7-9363-896cc0a860c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2d46174-be57-41e7-9363-896cc0a860c6" (UID: "e2d46174-be57-41e7-9363-896cc0a860c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:53.185877 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2d46174-be57-41e7-9363-896cc0a860c6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:53.185918 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-st5zv\" (UniqueName: \"kubernetes.io/projected/e2d46174-be57-41e7-9363-896cc0a860c6-kube-api-access-st5zv\") on node \"crc\" DevicePath \"\"" Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:53.185934 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2d46174-be57-41e7-9363-896cc0a860c6-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:53.609538 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pwjvd" event={"ID":"e2d46174-be57-41e7-9363-896cc0a860c6","Type":"ContainerDied","Data":"4a3c3c1ddcad130fde5b2d285794243bb79c7e62e37f15035b32815c2bf7da16"} Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:53.609997 4746 scope.go:117] "RemoveContainer" containerID="57615ae741ee918ab363e11afeb2fdbe055a09950702b1201dff8d781e461ecf" Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:53.609613 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pwjvd" Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:53.643722 4746 scope.go:117] "RemoveContainer" containerID="cdda24f4ac99292532b48682f5dff5215972edb6a17d16233b224c9e0225b625" Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:53.662719 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pwjvd"] Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:53.671733 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pwjvd"] Dec 11 10:20:53 crc kubenswrapper[4746]: I1211 10:20:53.687527 4746 scope.go:117] "RemoveContainer" containerID="72e141d9e68cbabd825a7615d334b4e9d578851585d860c3103683ff3df8b75b" Dec 11 10:20:55 crc kubenswrapper[4746]: I1211 10:20:55.645871 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" path="/var/lib/kubelet/pods/e2d46174-be57-41e7-9363-896cc0a860c6/volumes" Dec 11 10:20:59 crc kubenswrapper[4746]: I1211 10:20:59.877553 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:20:59 crc kubenswrapper[4746]: I1211 10:20:59.878159 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:21:02 crc kubenswrapper[4746]: I1211 10:21:02.028597 4746 scope.go:117] "RemoveContainer" containerID="37b8d35e70250950d9bbe0b5165c5f060ca3923c3b6a2a4227417c85cd8188ea" Dec 11 10:21:02 crc kubenswrapper[4746]: 
I1211 10:21:02.056871 4746 scope.go:117] "RemoveContainer" containerID="f612d6c58f9a2e937bb8175f4eb4b30f92f48ec22494ca3a2a721d0995240d37" Dec 11 10:21:02 crc kubenswrapper[4746]: I1211 10:21:02.081771 4746 scope.go:117] "RemoveContainer" containerID="d432af5db78935cd70aa39cd729eef869d509f0d91565a035720ff344b1f8881" Dec 11 10:21:02 crc kubenswrapper[4746]: I1211 10:21:02.107717 4746 scope.go:117] "RemoveContainer" containerID="aa62ee80f757f09799759ee272c8eb7e44293fb3646eefd6c716d24921b4bb3e" Dec 11 10:21:10 crc kubenswrapper[4746]: I1211 10:21:10.050653 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-bkqql"] Dec 11 10:21:10 crc kubenswrapper[4746]: I1211 10:21:10.062023 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-bkqql"] Dec 11 10:21:11 crc kubenswrapper[4746]: I1211 10:21:11.093514 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-jx9sp"] Dec 11 10:21:11 crc kubenswrapper[4746]: I1211 10:21:11.103157 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-1146-account-create-update-8p79d"] Dec 11 10:21:11 crc kubenswrapper[4746]: I1211 10:21:11.114507 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-jx9sp"] Dec 11 10:21:11 crc kubenswrapper[4746]: I1211 10:21:11.131874 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-3fbf-account-create-update-njwlh"] Dec 11 10:21:11 crc kubenswrapper[4746]: I1211 10:21:11.146433 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-1146-account-create-update-8p79d"] Dec 11 10:21:11 crc kubenswrapper[4746]: I1211 10:21:11.158904 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-3fbf-account-create-update-njwlh"] Dec 11 10:21:11 crc kubenswrapper[4746]: I1211 10:21:11.170377 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-jz62q"] 
Dec 11 10:21:11 crc kubenswrapper[4746]: I1211 10:21:11.185392 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-d52e-account-create-update-d8ngl"] Dec 11 10:21:11 crc kubenswrapper[4746]: I1211 10:21:11.197922 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-jz62q"] Dec 11 10:21:11 crc kubenswrapper[4746]: I1211 10:21:11.209140 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-d52e-account-create-update-d8ngl"] Dec 11 10:21:11 crc kubenswrapper[4746]: I1211 10:21:11.647076 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b20489d-e037-4fdf-83e6-aeb450aff0f8" path="/var/lib/kubelet/pods/0b20489d-e037-4fdf-83e6-aeb450aff0f8/volumes" Dec 11 10:21:11 crc kubenswrapper[4746]: I1211 10:21:11.648524 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d59e901-be21-486f-8b33-ea5b9b2a60a7" path="/var/lib/kubelet/pods/1d59e901-be21-486f-8b33-ea5b9b2a60a7/volumes" Dec 11 10:21:11 crc kubenswrapper[4746]: I1211 10:21:11.649391 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e" path="/var/lib/kubelet/pods/2a53dc1b-dcc1-4d3e-8d5b-960b3248c63e/volumes" Dec 11 10:21:11 crc kubenswrapper[4746]: I1211 10:21:11.650222 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d780f1f-9761-4b5a-a706-18d23110b336" path="/var/lib/kubelet/pods/6d780f1f-9761-4b5a-a706-18d23110b336/volumes" Dec 11 10:21:11 crc kubenswrapper[4746]: I1211 10:21:11.651949 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9fe9f17-da32-4064-b8fe-c2ae0c5107b0" path="/var/lib/kubelet/pods/a9fe9f17-da32-4064-b8fe-c2ae0c5107b0/volumes" Dec 11 10:21:11 crc kubenswrapper[4746]: I1211 10:21:11.652830 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b3935c-dc1b-4bfd-95d3-de24c6139ae9" 
path="/var/lib/kubelet/pods/d6b3935c-dc1b-4bfd-95d3-de24c6139ae9/volumes" Dec 11 10:21:24 crc kubenswrapper[4746]: I1211 10:21:24.104293 4746 generic.go:334] "Generic (PLEG): container finished" podID="3de7e541-4120-4c78-866b-9991eb4d1810" containerID="2c4b09d813b479944f12766a02a603b2d78edd26bb63646562537dc128ddc7e4" exitCode=0 Dec 11 10:21:24 crc kubenswrapper[4746]: I1211 10:21:24.105297 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" event={"ID":"3de7e541-4120-4c78-866b-9991eb4d1810","Type":"ContainerDied","Data":"2c4b09d813b479944f12766a02a603b2d78edd26bb63646562537dc128ddc7e4"} Dec 11 10:21:25 crc kubenswrapper[4746]: I1211 10:21:25.623104 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" Dec 11 10:21:25 crc kubenswrapper[4746]: I1211 10:21:25.811879 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-inventory\") pod \"3de7e541-4120-4c78-866b-9991eb4d1810\" (UID: \"3de7e541-4120-4c78-866b-9991eb4d1810\") " Dec 11 10:21:25 crc kubenswrapper[4746]: I1211 10:21:25.812189 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-bootstrap-combined-ca-bundle\") pod \"3de7e541-4120-4c78-866b-9991eb4d1810\" (UID: \"3de7e541-4120-4c78-866b-9991eb4d1810\") " Dec 11 10:21:25 crc kubenswrapper[4746]: I1211 10:21:25.812389 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhmqt\" (UniqueName: \"kubernetes.io/projected/3de7e541-4120-4c78-866b-9991eb4d1810-kube-api-access-qhmqt\") pod \"3de7e541-4120-4c78-866b-9991eb4d1810\" (UID: \"3de7e541-4120-4c78-866b-9991eb4d1810\") " Dec 11 10:21:25 crc 
kubenswrapper[4746]: I1211 10:21:25.812492 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-ssh-key\") pod \"3de7e541-4120-4c78-866b-9991eb4d1810\" (UID: \"3de7e541-4120-4c78-866b-9991eb4d1810\") " Dec 11 10:21:25 crc kubenswrapper[4746]: I1211 10:21:25.820289 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3de7e541-4120-4c78-866b-9991eb4d1810" (UID: "3de7e541-4120-4c78-866b-9991eb4d1810"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:21:25 crc kubenswrapper[4746]: I1211 10:21:25.823442 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de7e541-4120-4c78-866b-9991eb4d1810-kube-api-access-qhmqt" (OuterVolumeSpecName: "kube-api-access-qhmqt") pod "3de7e541-4120-4c78-866b-9991eb4d1810" (UID: "3de7e541-4120-4c78-866b-9991eb4d1810"). InnerVolumeSpecName "kube-api-access-qhmqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:21:25 crc kubenswrapper[4746]: I1211 10:21:25.848345 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3de7e541-4120-4c78-866b-9991eb4d1810" (UID: "3de7e541-4120-4c78-866b-9991eb4d1810"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:21:25 crc kubenswrapper[4746]: I1211 10:21:25.849863 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-inventory" (OuterVolumeSpecName: "inventory") pod "3de7e541-4120-4c78-866b-9991eb4d1810" (UID: "3de7e541-4120-4c78-866b-9991eb4d1810"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:21:25 crc kubenswrapper[4746]: I1211 10:21:25.917941 4746 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 10:21:25 crc kubenswrapper[4746]: I1211 10:21:25.917997 4746 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:21:25 crc kubenswrapper[4746]: I1211 10:21:25.918016 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhmqt\" (UniqueName: \"kubernetes.io/projected/3de7e541-4120-4c78-866b-9991eb4d1810-kube-api-access-qhmqt\") on node \"crc\" DevicePath \"\"" Dec 11 10:21:25 crc kubenswrapper[4746]: I1211 10:21:25.918030 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3de7e541-4120-4c78-866b-9991eb4d1810-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.128224 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" event={"ID":"3de7e541-4120-4c78-866b-9991eb4d1810","Type":"ContainerDied","Data":"2cee356e9600c96e85556ffa5739cd17f15b74240bb94667bc48f04be0e55586"} Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.128289 4746 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="2cee356e9600c96e85556ffa5739cd17f15b74240bb94667bc48f04be0e55586" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.128387 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.237599 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm"] Dec 11 10:21:26 crc kubenswrapper[4746]: E1211 10:21:26.238472 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" containerName="extract-content" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.238542 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" containerName="extract-content" Dec 11 10:21:26 crc kubenswrapper[4746]: E1211 10:21:26.238701 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de7e541-4120-4c78-866b-9991eb4d1810" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.238754 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de7e541-4120-4c78-866b-9991eb4d1810" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 11 10:21:26 crc kubenswrapper[4746]: E1211 10:21:26.238801 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" containerName="extract-utilities" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.238847 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" containerName="extract-utilities" Dec 11 10:21:26 crc kubenswrapper[4746]: E1211 10:21:26.238920 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" containerName="registry-server" Dec 11 10:21:26 crc 
kubenswrapper[4746]: I1211 10:21:26.238969 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" containerName="registry-server" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.239329 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de7e541-4120-4c78-866b-9991eb4d1810" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.239414 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d46174-be57-41e7-9363-896cc0a860c6" containerName="registry-server" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.240348 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.245240 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.245732 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.245837 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.246091 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.253684 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm"] Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.430226 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87d58bd7-f602-4d6b-b16c-1178233ebe3f-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm\" (UID: \"87d58bd7-f602-4d6b-b16c-1178233ebe3f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.430370 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xp4h\" (UniqueName: \"kubernetes.io/projected/87d58bd7-f602-4d6b-b16c-1178233ebe3f-kube-api-access-9xp4h\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm\" (UID: \"87d58bd7-f602-4d6b-b16c-1178233ebe3f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.430627 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87d58bd7-f602-4d6b-b16c-1178233ebe3f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm\" (UID: \"87d58bd7-f602-4d6b-b16c-1178233ebe3f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.532188 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87d58bd7-f602-4d6b-b16c-1178233ebe3f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm\" (UID: \"87d58bd7-f602-4d6b-b16c-1178233ebe3f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.532324 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xp4h\" (UniqueName: \"kubernetes.io/projected/87d58bd7-f602-4d6b-b16c-1178233ebe3f-kube-api-access-9xp4h\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm\" (UID: \"87d58bd7-f602-4d6b-b16c-1178233ebe3f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" 
Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.532400 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87d58bd7-f602-4d6b-b16c-1178233ebe3f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm\" (UID: \"87d58bd7-f602-4d6b-b16c-1178233ebe3f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.538301 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87d58bd7-f602-4d6b-b16c-1178233ebe3f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm\" (UID: \"87d58bd7-f602-4d6b-b16c-1178233ebe3f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.538683 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87d58bd7-f602-4d6b-b16c-1178233ebe3f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm\" (UID: \"87d58bd7-f602-4d6b-b16c-1178233ebe3f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.555922 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xp4h\" (UniqueName: \"kubernetes.io/projected/87d58bd7-f602-4d6b-b16c-1178233ebe3f-kube-api-access-9xp4h\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm\" (UID: \"87d58bd7-f602-4d6b-b16c-1178233ebe3f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.566960 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" Dec 11 10:21:26 crc kubenswrapper[4746]: I1211 10:21:26.979706 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm"] Dec 11 10:21:27 crc kubenswrapper[4746]: I1211 10:21:27.143575 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" event={"ID":"87d58bd7-f602-4d6b-b16c-1178233ebe3f","Type":"ContainerStarted","Data":"54ea539bf92f2f8ceabfdd33e7c1c0243bfe22aed9ff23f317b6f17721757313"} Dec 11 10:21:28 crc kubenswrapper[4746]: I1211 10:21:28.155041 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" event={"ID":"87d58bd7-f602-4d6b-b16c-1178233ebe3f","Type":"ContainerStarted","Data":"0cf4e97c173c53f1baa08b05d5a3e046f0b1f87b4988ba5e7726cad6105bda4c"} Dec 11 10:21:28 crc kubenswrapper[4746]: I1211 10:21:28.168806 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" podStartSLOduration=1.328394092 podStartE2EDuration="2.168779211s" podCreationTimestamp="2025-12-11 10:21:26 +0000 UTC" firstStartedPulling="2025-12-11 10:21:27.000469636 +0000 UTC m=+1659.860332959" lastFinishedPulling="2025-12-11 10:21:27.840854765 +0000 UTC m=+1660.700718078" observedRunningTime="2025-12-11 10:21:28.167942258 +0000 UTC m=+1661.027805571" watchObservedRunningTime="2025-12-11 10:21:28.168779211 +0000 UTC m=+1661.028642524" Dec 11 10:21:29 crc kubenswrapper[4746]: I1211 10:21:29.877884 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:21:29 
crc kubenswrapper[4746]: I1211 10:21:29.880034 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.089150 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4451-account-create-update-tld6b"] Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.112095 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-2b5xl"] Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.129684 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-3829-account-create-update-gkgnn"] Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.148256 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-4451-account-create-update-tld6b"] Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.159744 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-3829-account-create-update-gkgnn"] Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.170721 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-9a83-account-create-update-5s7gx"] Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.181917 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-2b5xl"] Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.198179 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-c6862"] Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.207372 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-crmlt"] Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.216883 4746 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/cinder-db-create-c6862"] Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.231988 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-crmlt"] Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.246182 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-9a83-account-create-update-5s7gx"] Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.643831 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17bccce7-01c4-4456-a26b-c01374a263b5" path="/var/lib/kubelet/pods/17bccce7-01c4-4456-a26b-c01374a263b5/volumes" Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.645310 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ada9f489-ce1d-4251-a6e3-cfa7f322d9f0" path="/var/lib/kubelet/pods/ada9f489-ce1d-4251-a6e3-cfa7f322d9f0/volumes" Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.646067 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820" path="/var/lib/kubelet/pods/ceb6d8cf-2891-4f5e-85e1-af1cc4dd2820/volumes" Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.646763 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de549276-34a9-48bd-8635-a46910019250" path="/var/lib/kubelet/pods/de549276-34a9-48bd-8635-a46910019250/volumes" Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.648031 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e61f0916-b247-44ba-bf5c-fcd4e00ecd88" path="/var/lib/kubelet/pods/e61f0916-b247-44ba-bf5c-fcd4e00ecd88/volumes" Dec 11 10:21:37 crc kubenswrapper[4746]: I1211 10:21:37.648684 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb5a612d-f335-4dfc-912b-30247387c806" path="/var/lib/kubelet/pods/fb5a612d-f335-4dfc-912b-30247387c806/volumes" Dec 11 10:21:46 crc kubenswrapper[4746]: I1211 10:21:46.035542 4746 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/keystone-db-sync-2bptw"] Dec 11 10:21:46 crc kubenswrapper[4746]: I1211 10:21:46.048409 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-2bptw"] Dec 11 10:21:47 crc kubenswrapper[4746]: I1211 10:21:47.648478 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="480c95bc-8a38-4304-af6c-3118a7571459" path="/var/lib/kubelet/pods/480c95bc-8a38-4304-af6c-3118a7571459/volumes" Dec 11 10:21:58 crc kubenswrapper[4746]: I1211 10:21:58.465332 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wjlcf"] Dec 11 10:21:58 crc kubenswrapper[4746]: I1211 10:21:58.469218 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:21:58 crc kubenswrapper[4746]: I1211 10:21:58.478246 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjlcf"] Dec 11 10:21:58 crc kubenswrapper[4746]: I1211 10:21:58.511820 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080e7e44-3898-4cec-a490-2cf6f2fa3021-catalog-content\") pod \"redhat-marketplace-wjlcf\" (UID: \"080e7e44-3898-4cec-a490-2cf6f2fa3021\") " pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:21:58 crc kubenswrapper[4746]: I1211 10:21:58.512080 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvlc7\" (UniqueName: \"kubernetes.io/projected/080e7e44-3898-4cec-a490-2cf6f2fa3021-kube-api-access-rvlc7\") pod \"redhat-marketplace-wjlcf\" (UID: \"080e7e44-3898-4cec-a490-2cf6f2fa3021\") " pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:21:58 crc kubenswrapper[4746]: I1211 10:21:58.512221 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080e7e44-3898-4cec-a490-2cf6f2fa3021-utilities\") pod \"redhat-marketplace-wjlcf\" (UID: \"080e7e44-3898-4cec-a490-2cf6f2fa3021\") " pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:21:58 crc kubenswrapper[4746]: I1211 10:21:58.614863 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080e7e44-3898-4cec-a490-2cf6f2fa3021-catalog-content\") pod \"redhat-marketplace-wjlcf\" (UID: \"080e7e44-3898-4cec-a490-2cf6f2fa3021\") " pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:21:58 crc kubenswrapper[4746]: I1211 10:21:58.615229 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvlc7\" (UniqueName: \"kubernetes.io/projected/080e7e44-3898-4cec-a490-2cf6f2fa3021-kube-api-access-rvlc7\") pod \"redhat-marketplace-wjlcf\" (UID: \"080e7e44-3898-4cec-a490-2cf6f2fa3021\") " pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:21:58 crc kubenswrapper[4746]: I1211 10:21:58.615270 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080e7e44-3898-4cec-a490-2cf6f2fa3021-utilities\") pod \"redhat-marketplace-wjlcf\" (UID: \"080e7e44-3898-4cec-a490-2cf6f2fa3021\") " pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:21:58 crc kubenswrapper[4746]: I1211 10:21:58.615726 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080e7e44-3898-4cec-a490-2cf6f2fa3021-catalog-content\") pod \"redhat-marketplace-wjlcf\" (UID: \"080e7e44-3898-4cec-a490-2cf6f2fa3021\") " pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:21:58 crc kubenswrapper[4746]: I1211 10:21:58.615799 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/080e7e44-3898-4cec-a490-2cf6f2fa3021-utilities\") pod \"redhat-marketplace-wjlcf\" (UID: \"080e7e44-3898-4cec-a490-2cf6f2fa3021\") " pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:21:58 crc kubenswrapper[4746]: I1211 10:21:58.651810 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvlc7\" (UniqueName: \"kubernetes.io/projected/080e7e44-3898-4cec-a490-2cf6f2fa3021-kube-api-access-rvlc7\") pod \"redhat-marketplace-wjlcf\" (UID: \"080e7e44-3898-4cec-a490-2cf6f2fa3021\") " pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:21:58 crc kubenswrapper[4746]: I1211 10:21:58.797692 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:21:59 crc kubenswrapper[4746]: I1211 10:21:59.317560 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjlcf"] Dec 11 10:21:59 crc kubenswrapper[4746]: I1211 10:21:59.492846 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjlcf" event={"ID":"080e7e44-3898-4cec-a490-2cf6f2fa3021","Type":"ContainerStarted","Data":"f581650e94931413e40b49fd07a6318abba60681819a38c41981205f4a856036"} Dec 11 10:21:59 crc kubenswrapper[4746]: I1211 10:21:59.878381 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:21:59 crc kubenswrapper[4746]: I1211 10:21:59.878676 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Dec 11 10:21:59 crc kubenswrapper[4746]: I1211 10:21:59.878737 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 10:21:59 crc kubenswrapper[4746]: I1211 10:21:59.879698 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb"} pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:21:59 crc kubenswrapper[4746]: I1211 10:21:59.879759 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" containerID="cri-o://7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" gracePeriod=600 Dec 11 10:22:00 crc kubenswrapper[4746]: E1211 10:22:00.008662 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:22:00 crc kubenswrapper[4746]: I1211 10:22:00.504918 4746 generic.go:334] "Generic (PLEG): container finished" podID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" exitCode=0 Dec 11 10:22:00 crc kubenswrapper[4746]: I1211 10:22:00.505021 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerDied","Data":"7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb"} Dec 11 10:22:00 crc kubenswrapper[4746]: I1211 10:22:00.505104 4746 scope.go:117] "RemoveContainer" containerID="c4bc2bbb26d764668868d6659aa470877d6623d9d959c05982277a00cdacbca4" Dec 11 10:22:00 crc kubenswrapper[4746]: I1211 10:22:00.505885 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:22:00 crc kubenswrapper[4746]: E1211 10:22:00.506186 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:22:00 crc kubenswrapper[4746]: I1211 10:22:00.508747 4746 generic.go:334] "Generic (PLEG): container finished" podID="080e7e44-3898-4cec-a490-2cf6f2fa3021" containerID="48ee217a2dec5771a1f0fb6e57e89f411956db997a814f2de79ddcc398ccaa09" exitCode=0 Dec 11 10:22:00 crc kubenswrapper[4746]: I1211 10:22:00.508793 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjlcf" event={"ID":"080e7e44-3898-4cec-a490-2cf6f2fa3021","Type":"ContainerDied","Data":"48ee217a2dec5771a1f0fb6e57e89f411956db997a814f2de79ddcc398ccaa09"} Dec 11 10:22:02 crc kubenswrapper[4746]: I1211 10:22:02.194891 4746 scope.go:117] "RemoveContainer" containerID="0b05f61810e28bb282fbacb0d00551ff3b6bb226eaa6d00f6a20748aaf1d382b" Dec 11 10:22:02 crc kubenswrapper[4746]: I1211 10:22:02.220774 4746 scope.go:117] "RemoveContainer" 
containerID="a75e2635229c480fd856683d0b9651efb117cc90f7da069c267062aec45034e4" Dec 11 10:22:02 crc kubenswrapper[4746]: I1211 10:22:02.271275 4746 scope.go:117] "RemoveContainer" containerID="81957127c383d6bb9a4f83f61040800ef86421c667caf2adae97b2747eb08bcd" Dec 11 10:22:02 crc kubenswrapper[4746]: I1211 10:22:02.310625 4746 scope.go:117] "RemoveContainer" containerID="08f26fd601ce04a319b4451e269e57884f7cb5097c43e042458fa8cd28fb3917" Dec 11 10:22:02 crc kubenswrapper[4746]: I1211 10:22:02.378900 4746 scope.go:117] "RemoveContainer" containerID="2d49886d6c26cd542a6af8155ca894d9926d4b6fd88b774b34f185235c440376" Dec 11 10:22:02 crc kubenswrapper[4746]: I1211 10:22:02.423901 4746 scope.go:117] "RemoveContainer" containerID="10c4b969993eb929646967e59f54a631e907bdb73d553f64713714bff1a2507a" Dec 11 10:22:02 crc kubenswrapper[4746]: I1211 10:22:02.488657 4746 scope.go:117] "RemoveContainer" containerID="0bca09d3c902bbb1fddaebd16a860b09bb6e83ebe0b97d19ce291254d7bb56e9" Dec 11 10:22:02 crc kubenswrapper[4746]: I1211 10:22:02.574845 4746 generic.go:334] "Generic (PLEG): container finished" podID="080e7e44-3898-4cec-a490-2cf6f2fa3021" containerID="2273a1d8c168e39d3b2ef0a384ab808aae2872cd00e4b527f4f16e48fd01acae" exitCode=0 Dec 11 10:22:02 crc kubenswrapper[4746]: I1211 10:22:02.574912 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjlcf" event={"ID":"080e7e44-3898-4cec-a490-2cf6f2fa3021","Type":"ContainerDied","Data":"2273a1d8c168e39d3b2ef0a384ab808aae2872cd00e4b527f4f16e48fd01acae"} Dec 11 10:22:02 crc kubenswrapper[4746]: I1211 10:22:02.581694 4746 scope.go:117] "RemoveContainer" containerID="d3499e475a5c574f468ec2ef7739e33c7202308e5b4718b4c21ff1fdaa27219a" Dec 11 10:22:02 crc kubenswrapper[4746]: I1211 10:22:02.641401 4746 scope.go:117] "RemoveContainer" containerID="d4c09a6f8c51c3c63d8aa4c59c4bf7bf5a527c45f91d71af9babaf64fb738c80" Dec 11 10:22:02 crc kubenswrapper[4746]: I1211 10:22:02.695509 4746 scope.go:117] 
"RemoveContainer" containerID="6d9bd831cfb0a1e3c1dd8758c55fdbd478b7b9562dafa9aafbb589a28f48f569" Dec 11 10:22:02 crc kubenswrapper[4746]: I1211 10:22:02.735654 4746 scope.go:117] "RemoveContainer" containerID="25d3f9973d044327d2f3d3592206e84765edc38b80c461214b1d6b49256d8209" Dec 11 10:22:02 crc kubenswrapper[4746]: I1211 10:22:02.762709 4746 scope.go:117] "RemoveContainer" containerID="2d2100e23f49ffdb0eb2fafb20079db05aa49df66d631b7c3403f1a8127f7f59" Dec 11 10:22:02 crc kubenswrapper[4746]: I1211 10:22:02.783369 4746 scope.go:117] "RemoveContainer" containerID="0c3014c14de4958b5bea89e84f51b7bdd53e8717a96c2022ed489a9a12237335" Dec 11 10:22:04 crc kubenswrapper[4746]: I1211 10:22:04.676583 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjlcf" event={"ID":"080e7e44-3898-4cec-a490-2cf6f2fa3021","Type":"ContainerStarted","Data":"bf4f770d43081a65e3b7750f28fc1c123f3df335ee516b0628ea129f2d2b85d6"} Dec 11 10:22:04 crc kubenswrapper[4746]: I1211 10:22:04.714106 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wjlcf" podStartSLOduration=3.453281627 podStartE2EDuration="6.714066518s" podCreationTimestamp="2025-12-11 10:21:58 +0000 UTC" firstStartedPulling="2025-12-11 10:22:00.510919444 +0000 UTC m=+1693.370782757" lastFinishedPulling="2025-12-11 10:22:03.771704335 +0000 UTC m=+1696.631567648" observedRunningTime="2025-12-11 10:22:04.695770086 +0000 UTC m=+1697.555633399" watchObservedRunningTime="2025-12-11 10:22:04.714066518 +0000 UTC m=+1697.573929841" Dec 11 10:22:08 crc kubenswrapper[4746]: I1211 10:22:08.798640 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:22:08 crc kubenswrapper[4746]: I1211 10:22:08.799238 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:22:08 crc 
kubenswrapper[4746]: I1211 10:22:08.847176 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:22:09 crc kubenswrapper[4746]: I1211 10:22:09.773662 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:22:09 crc kubenswrapper[4746]: I1211 10:22:09.830368 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjlcf"] Dec 11 10:22:10 crc kubenswrapper[4746]: I1211 10:22:10.631505 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:22:10 crc kubenswrapper[4746]: E1211 10:22:10.632236 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:22:11 crc kubenswrapper[4746]: I1211 10:22:11.735395 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wjlcf" podUID="080e7e44-3898-4cec-a490-2cf6f2fa3021" containerName="registry-server" containerID="cri-o://bf4f770d43081a65e3b7750f28fc1c123f3df335ee516b0628ea129f2d2b85d6" gracePeriod=2 Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.194534 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.315943 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080e7e44-3898-4cec-a490-2cf6f2fa3021-catalog-content\") pod \"080e7e44-3898-4cec-a490-2cf6f2fa3021\" (UID: \"080e7e44-3898-4cec-a490-2cf6f2fa3021\") " Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.316268 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080e7e44-3898-4cec-a490-2cf6f2fa3021-utilities\") pod \"080e7e44-3898-4cec-a490-2cf6f2fa3021\" (UID: \"080e7e44-3898-4cec-a490-2cf6f2fa3021\") " Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.316367 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvlc7\" (UniqueName: \"kubernetes.io/projected/080e7e44-3898-4cec-a490-2cf6f2fa3021-kube-api-access-rvlc7\") pod \"080e7e44-3898-4cec-a490-2cf6f2fa3021\" (UID: \"080e7e44-3898-4cec-a490-2cf6f2fa3021\") " Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.317167 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/080e7e44-3898-4cec-a490-2cf6f2fa3021-utilities" (OuterVolumeSpecName: "utilities") pod "080e7e44-3898-4cec-a490-2cf6f2fa3021" (UID: "080e7e44-3898-4cec-a490-2cf6f2fa3021"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.322153 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/080e7e44-3898-4cec-a490-2cf6f2fa3021-kube-api-access-rvlc7" (OuterVolumeSpecName: "kube-api-access-rvlc7") pod "080e7e44-3898-4cec-a490-2cf6f2fa3021" (UID: "080e7e44-3898-4cec-a490-2cf6f2fa3021"). InnerVolumeSpecName "kube-api-access-rvlc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.337955 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/080e7e44-3898-4cec-a490-2cf6f2fa3021-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "080e7e44-3898-4cec-a490-2cf6f2fa3021" (UID: "080e7e44-3898-4cec-a490-2cf6f2fa3021"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.418733 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080e7e44-3898-4cec-a490-2cf6f2fa3021-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.418787 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvlc7\" (UniqueName: \"kubernetes.io/projected/080e7e44-3898-4cec-a490-2cf6f2fa3021-kube-api-access-rvlc7\") on node \"crc\" DevicePath \"\"" Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.418800 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080e7e44-3898-4cec-a490-2cf6f2fa3021-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.745496 4746 generic.go:334] "Generic (PLEG): container finished" podID="080e7e44-3898-4cec-a490-2cf6f2fa3021" containerID="bf4f770d43081a65e3b7750f28fc1c123f3df335ee516b0628ea129f2d2b85d6" exitCode=0 Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.745547 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wjlcf" event={"ID":"080e7e44-3898-4cec-a490-2cf6f2fa3021","Type":"ContainerDied","Data":"bf4f770d43081a65e3b7750f28fc1c123f3df335ee516b0628ea129f2d2b85d6"} Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.745577 4746 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-wjlcf" event={"ID":"080e7e44-3898-4cec-a490-2cf6f2fa3021","Type":"ContainerDied","Data":"f581650e94931413e40b49fd07a6318abba60681819a38c41981205f4a856036"} Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.745575 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wjlcf" Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.745595 4746 scope.go:117] "RemoveContainer" containerID="bf4f770d43081a65e3b7750f28fc1c123f3df335ee516b0628ea129f2d2b85d6" Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.769158 4746 scope.go:117] "RemoveContainer" containerID="2273a1d8c168e39d3b2ef0a384ab808aae2872cd00e4b527f4f16e48fd01acae" Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.780570 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjlcf"] Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.793659 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wjlcf"] Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.805117 4746 scope.go:117] "RemoveContainer" containerID="48ee217a2dec5771a1f0fb6e57e89f411956db997a814f2de79ddcc398ccaa09" Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.844298 4746 scope.go:117] "RemoveContainer" containerID="bf4f770d43081a65e3b7750f28fc1c123f3df335ee516b0628ea129f2d2b85d6" Dec 11 10:22:12 crc kubenswrapper[4746]: E1211 10:22:12.844963 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf4f770d43081a65e3b7750f28fc1c123f3df335ee516b0628ea129f2d2b85d6\": container with ID starting with bf4f770d43081a65e3b7750f28fc1c123f3df335ee516b0628ea129f2d2b85d6 not found: ID does not exist" containerID="bf4f770d43081a65e3b7750f28fc1c123f3df335ee516b0628ea129f2d2b85d6" Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.845001 4746 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf4f770d43081a65e3b7750f28fc1c123f3df335ee516b0628ea129f2d2b85d6"} err="failed to get container status \"bf4f770d43081a65e3b7750f28fc1c123f3df335ee516b0628ea129f2d2b85d6\": rpc error: code = NotFound desc = could not find container \"bf4f770d43081a65e3b7750f28fc1c123f3df335ee516b0628ea129f2d2b85d6\": container with ID starting with bf4f770d43081a65e3b7750f28fc1c123f3df335ee516b0628ea129f2d2b85d6 not found: ID does not exist" Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.845023 4746 scope.go:117] "RemoveContainer" containerID="2273a1d8c168e39d3b2ef0a384ab808aae2872cd00e4b527f4f16e48fd01acae" Dec 11 10:22:12 crc kubenswrapper[4746]: E1211 10:22:12.845345 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2273a1d8c168e39d3b2ef0a384ab808aae2872cd00e4b527f4f16e48fd01acae\": container with ID starting with 2273a1d8c168e39d3b2ef0a384ab808aae2872cd00e4b527f4f16e48fd01acae not found: ID does not exist" containerID="2273a1d8c168e39d3b2ef0a384ab808aae2872cd00e4b527f4f16e48fd01acae" Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.845379 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2273a1d8c168e39d3b2ef0a384ab808aae2872cd00e4b527f4f16e48fd01acae"} err="failed to get container status \"2273a1d8c168e39d3b2ef0a384ab808aae2872cd00e4b527f4f16e48fd01acae\": rpc error: code = NotFound desc = could not find container \"2273a1d8c168e39d3b2ef0a384ab808aae2872cd00e4b527f4f16e48fd01acae\": container with ID starting with 2273a1d8c168e39d3b2ef0a384ab808aae2872cd00e4b527f4f16e48fd01acae not found: ID does not exist" Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.845405 4746 scope.go:117] "RemoveContainer" containerID="48ee217a2dec5771a1f0fb6e57e89f411956db997a814f2de79ddcc398ccaa09" Dec 11 10:22:12 crc kubenswrapper[4746]: E1211 
10:22:12.845869 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48ee217a2dec5771a1f0fb6e57e89f411956db997a814f2de79ddcc398ccaa09\": container with ID starting with 48ee217a2dec5771a1f0fb6e57e89f411956db997a814f2de79ddcc398ccaa09 not found: ID does not exist" containerID="48ee217a2dec5771a1f0fb6e57e89f411956db997a814f2de79ddcc398ccaa09" Dec 11 10:22:12 crc kubenswrapper[4746]: I1211 10:22:12.845926 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48ee217a2dec5771a1f0fb6e57e89f411956db997a814f2de79ddcc398ccaa09"} err="failed to get container status \"48ee217a2dec5771a1f0fb6e57e89f411956db997a814f2de79ddcc398ccaa09\": rpc error: code = NotFound desc = could not find container \"48ee217a2dec5771a1f0fb6e57e89f411956db997a814f2de79ddcc398ccaa09\": container with ID starting with 48ee217a2dec5771a1f0fb6e57e89f411956db997a814f2de79ddcc398ccaa09 not found: ID does not exist" Dec 11 10:22:13 crc kubenswrapper[4746]: I1211 10:22:13.641984 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="080e7e44-3898-4cec-a490-2cf6f2fa3021" path="/var/lib/kubelet/pods/080e7e44-3898-4cec-a490-2cf6f2fa3021/volumes" Dec 11 10:22:24 crc kubenswrapper[4746]: I1211 10:22:24.059481 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-j82k7"] Dec 11 10:22:24 crc kubenswrapper[4746]: I1211 10:22:24.077026 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-fh8cg"] Dec 11 10:22:24 crc kubenswrapper[4746]: I1211 10:22:24.090243 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-j82k7"] Dec 11 10:22:24 crc kubenswrapper[4746]: I1211 10:22:24.103900 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-fh8cg"] Dec 11 10:22:25 crc kubenswrapper[4746]: I1211 10:22:25.630214 4746 scope.go:117] "RemoveContainer" 
containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:22:25 crc kubenswrapper[4746]: E1211 10:22:25.630576 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:22:25 crc kubenswrapper[4746]: I1211 10:22:25.643040 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4daad880-1f8c-4f37-b718-b8b9eb88d0f3" path="/var/lib/kubelet/pods/4daad880-1f8c-4f37-b718-b8b9eb88d0f3/volumes" Dec 11 10:22:25 crc kubenswrapper[4746]: I1211 10:22:25.643641 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bce3e7bb-063f-4e71-b1d7-e3a14e1a9983" path="/var/lib/kubelet/pods/bce3e7bb-063f-4e71-b1d7-e3a14e1a9983/volumes" Dec 11 10:22:29 crc kubenswrapper[4746]: I1211 10:22:29.049224 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-m9x6s"] Dec 11 10:22:29 crc kubenswrapper[4746]: I1211 10:22:29.058190 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-m9x6s"] Dec 11 10:22:29 crc kubenswrapper[4746]: I1211 10:22:29.646963 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9962918f-3f76-42ae-b292-0c2300106516" path="/var/lib/kubelet/pods/9962918f-3f76-42ae-b292-0c2300106516/volumes" Dec 11 10:22:32 crc kubenswrapper[4746]: I1211 10:22:32.028956 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-nxg7v"] Dec 11 10:22:32 crc kubenswrapper[4746]: I1211 10:22:32.038750 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-nxg7v"] Dec 11 10:22:33 crc 
kubenswrapper[4746]: I1211 10:22:33.645572 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbcfe442-59c5-4c0e-b051-9ea04f8127b3" path="/var/lib/kubelet/pods/dbcfe442-59c5-4c0e-b051-9ea04f8127b3/volumes" Dec 11 10:22:38 crc kubenswrapper[4746]: I1211 10:22:38.630506 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:22:38 crc kubenswrapper[4746]: E1211 10:22:38.631673 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:22:52 crc kubenswrapper[4746]: I1211 10:22:52.630779 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:22:52 crc kubenswrapper[4746]: E1211 10:22:52.631804 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:22:55 crc kubenswrapper[4746]: I1211 10:22:55.066965 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-lznjj"] Dec 11 10:22:55 crc kubenswrapper[4746]: I1211 10:22:55.086516 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-lznjj"] Dec 11 10:22:55 crc kubenswrapper[4746]: I1211 10:22:55.651209 4746 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc" path="/var/lib/kubelet/pods/c5e1ebd4-a137-4ff8-93a1-bab90cb2f8cc/volumes" Dec 11 10:22:57 crc kubenswrapper[4746]: I1211 10:22:57.036401 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-4n4rh"] Dec 11 10:22:57 crc kubenswrapper[4746]: I1211 10:22:57.048259 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-4n4rh"] Dec 11 10:22:57 crc kubenswrapper[4746]: I1211 10:22:57.647911 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4080f404-aff6-42e2-856c-5b347b908963" path="/var/lib/kubelet/pods/4080f404-aff6-42e2-856c-5b347b908963/volumes" Dec 11 10:23:03 crc kubenswrapper[4746]: I1211 10:23:03.049650 4746 scope.go:117] "RemoveContainer" containerID="d539efced0f07c5cf02199c4e927d7a28b81ada0b57c67dc11a5d0678f447880" Dec 11 10:23:03 crc kubenswrapper[4746]: I1211 10:23:03.090843 4746 scope.go:117] "RemoveContainer" containerID="60ae15292efbbacd5588fccd317e73efd7b4778c93ec9d786e7ee9d557a4f772" Dec 11 10:23:03 crc kubenswrapper[4746]: I1211 10:23:03.151594 4746 scope.go:117] "RemoveContainer" containerID="989b35036cb1856e1131040cced5f7c6e9934632450908e0e37ed82485ea0cbf" Dec 11 10:23:03 crc kubenswrapper[4746]: I1211 10:23:03.195884 4746 scope.go:117] "RemoveContainer" containerID="eafa096af237c54a93ce9c9f67f35f86f217b42bfc9f590f905ee5ddfc6b61df" Dec 11 10:23:03 crc kubenswrapper[4746]: I1211 10:23:03.236324 4746 scope.go:117] "RemoveContainer" containerID="70cf642fe694e74c9df879731c530d62d8f3366c2fdf07e899d4eb6446bc8cbb" Dec 11 10:23:03 crc kubenswrapper[4746]: I1211 10:23:03.272144 4746 scope.go:117] "RemoveContainer" containerID="cd8fbfb6952d9f93570bbee8cf55a8f0b225b52bc83eb395b716d78cffae9157" Dec 11 10:23:07 crc kubenswrapper[4746]: I1211 10:23:07.635948 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:23:07 crc 
kubenswrapper[4746]: E1211 10:23:07.636713 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:23:18 crc kubenswrapper[4746]: I1211 10:23:18.646481 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:23:18 crc kubenswrapper[4746]: E1211 10:23:18.647546 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:23:21 crc kubenswrapper[4746]: I1211 10:23:21.566890 4746 generic.go:334] "Generic (PLEG): container finished" podID="87d58bd7-f602-4d6b-b16c-1178233ebe3f" containerID="0cf4e97c173c53f1baa08b05d5a3e046f0b1f87b4988ba5e7726cad6105bda4c" exitCode=0 Dec 11 10:23:21 crc kubenswrapper[4746]: I1211 10:23:21.566958 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" event={"ID":"87d58bd7-f602-4d6b-b16c-1178233ebe3f","Type":"ContainerDied","Data":"0cf4e97c173c53f1baa08b05d5a3e046f0b1f87b4988ba5e7726cad6105bda4c"} Dec 11 10:23:22 crc kubenswrapper[4746]: I1211 10:23:22.992202 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.145146 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87d58bd7-f602-4d6b-b16c-1178233ebe3f-inventory\") pod \"87d58bd7-f602-4d6b-b16c-1178233ebe3f\" (UID: \"87d58bd7-f602-4d6b-b16c-1178233ebe3f\") " Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.145240 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87d58bd7-f602-4d6b-b16c-1178233ebe3f-ssh-key\") pod \"87d58bd7-f602-4d6b-b16c-1178233ebe3f\" (UID: \"87d58bd7-f602-4d6b-b16c-1178233ebe3f\") " Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.145410 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xp4h\" (UniqueName: \"kubernetes.io/projected/87d58bd7-f602-4d6b-b16c-1178233ebe3f-kube-api-access-9xp4h\") pod \"87d58bd7-f602-4d6b-b16c-1178233ebe3f\" (UID: \"87d58bd7-f602-4d6b-b16c-1178233ebe3f\") " Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.158302 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87d58bd7-f602-4d6b-b16c-1178233ebe3f-kube-api-access-9xp4h" (OuterVolumeSpecName: "kube-api-access-9xp4h") pod "87d58bd7-f602-4d6b-b16c-1178233ebe3f" (UID: "87d58bd7-f602-4d6b-b16c-1178233ebe3f"). InnerVolumeSpecName "kube-api-access-9xp4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.181781 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d58bd7-f602-4d6b-b16c-1178233ebe3f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "87d58bd7-f602-4d6b-b16c-1178233ebe3f" (UID: "87d58bd7-f602-4d6b-b16c-1178233ebe3f"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.199562 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87d58bd7-f602-4d6b-b16c-1178233ebe3f-inventory" (OuterVolumeSpecName: "inventory") pod "87d58bd7-f602-4d6b-b16c-1178233ebe3f" (UID: "87d58bd7-f602-4d6b-b16c-1178233ebe3f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.248352 4746 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/87d58bd7-f602-4d6b-b16c-1178233ebe3f-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.248407 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/87d58bd7-f602-4d6b-b16c-1178233ebe3f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.248420 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xp4h\" (UniqueName: \"kubernetes.io/projected/87d58bd7-f602-4d6b-b16c-1178233ebe3f-kube-api-access-9xp4h\") on node \"crc\" DevicePath \"\"" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.590890 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" event={"ID":"87d58bd7-f602-4d6b-b16c-1178233ebe3f","Type":"ContainerDied","Data":"54ea539bf92f2f8ceabfdd33e7c1c0243bfe22aed9ff23f317b6f17721757313"} Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.591320 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54ea539bf92f2f8ceabfdd33e7c1c0243bfe22aed9ff23f317b6f17721757313" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.590997 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.697743 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm"] Dec 11 10:23:23 crc kubenswrapper[4746]: E1211 10:23:23.698311 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080e7e44-3898-4cec-a490-2cf6f2fa3021" containerName="extract-utilities" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.698336 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="080e7e44-3898-4cec-a490-2cf6f2fa3021" containerName="extract-utilities" Dec 11 10:23:23 crc kubenswrapper[4746]: E1211 10:23:23.698363 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080e7e44-3898-4cec-a490-2cf6f2fa3021" containerName="registry-server" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.698370 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="080e7e44-3898-4cec-a490-2cf6f2fa3021" containerName="registry-server" Dec 11 10:23:23 crc kubenswrapper[4746]: E1211 10:23:23.698390 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d58bd7-f602-4d6b-b16c-1178233ebe3f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.698404 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d58bd7-f602-4d6b-b16c-1178233ebe3f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 11 10:23:23 crc kubenswrapper[4746]: E1211 10:23:23.698415 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080e7e44-3898-4cec-a490-2cf6f2fa3021" containerName="extract-content" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.698421 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="080e7e44-3898-4cec-a490-2cf6f2fa3021" containerName="extract-content" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 
10:23:23.698619 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d58bd7-f602-4d6b-b16c-1178233ebe3f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.698642 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="080e7e44-3898-4cec-a490-2cf6f2fa3021" containerName="registry-server" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.699604 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.704018 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.704399 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.704542 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.706475 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.722981 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm"] Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.762147 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm\" (UID: \"f6fdd767-cd5e-4858-9c19-ebc73fd789d4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" Dec 11 10:23:23 crc 
kubenswrapper[4746]: I1211 10:23:23.762200 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9blv6\" (UniqueName: \"kubernetes.io/projected/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-kube-api-access-9blv6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm\" (UID: \"f6fdd767-cd5e-4858-9c19-ebc73fd789d4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.762325 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm\" (UID: \"f6fdd767-cd5e-4858-9c19-ebc73fd789d4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.866582 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm\" (UID: \"f6fdd767-cd5e-4858-9c19-ebc73fd789d4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.866646 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9blv6\" (UniqueName: \"kubernetes.io/projected/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-kube-api-access-9blv6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm\" (UID: \"f6fdd767-cd5e-4858-9c19-ebc73fd789d4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.866817 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm\" (UID: \"f6fdd767-cd5e-4858-9c19-ebc73fd789d4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.869947 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm\" (UID: \"f6fdd767-cd5e-4858-9c19-ebc73fd789d4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.870335 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm\" (UID: \"f6fdd767-cd5e-4858-9c19-ebc73fd789d4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" Dec 11 10:23:23 crc kubenswrapper[4746]: I1211 10:23:23.888137 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9blv6\" (UniqueName: \"kubernetes.io/projected/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-kube-api-access-9blv6\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm\" (UID: \"f6fdd767-cd5e-4858-9c19-ebc73fd789d4\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" Dec 11 10:23:24 crc kubenswrapper[4746]: I1211 10:23:24.022400 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" Dec 11 10:23:24 crc kubenswrapper[4746]: I1211 10:23:24.572799 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm"] Dec 11 10:23:24 crc kubenswrapper[4746]: I1211 10:23:24.609010 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" event={"ID":"f6fdd767-cd5e-4858-9c19-ebc73fd789d4","Type":"ContainerStarted","Data":"829eec46a6b34866a578fd37748d4ef450a2dc1310c93e9e9d80f2ede10d9e53"} Dec 11 10:23:27 crc kubenswrapper[4746]: I1211 10:23:27.650544 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" event={"ID":"f6fdd767-cd5e-4858-9c19-ebc73fd789d4","Type":"ContainerStarted","Data":"f71724d3213796b8db22777f6ee9294e0fa424212f74f4f46cb369920f53ac71"} Dec 11 10:23:27 crc kubenswrapper[4746]: I1211 10:23:27.680512 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" podStartSLOduration=2.931202636 podStartE2EDuration="4.680490236s" podCreationTimestamp="2025-12-11 10:23:23 +0000 UTC" firstStartedPulling="2025-12-11 10:23:24.580635996 +0000 UTC m=+1777.440499309" lastFinishedPulling="2025-12-11 10:23:26.329923596 +0000 UTC m=+1779.189786909" observedRunningTime="2025-12-11 10:23:27.671683198 +0000 UTC m=+1780.531546521" watchObservedRunningTime="2025-12-11 10:23:27.680490236 +0000 UTC m=+1780.540353549" Dec 11 10:23:29 crc kubenswrapper[4746]: I1211 10:23:29.630755 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:23:29 crc kubenswrapper[4746]: E1211 10:23:29.631419 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:23:40 crc kubenswrapper[4746]: I1211 10:23:40.629966 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:23:40 crc kubenswrapper[4746]: E1211 10:23:40.630903 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:23:42 crc kubenswrapper[4746]: I1211 10:23:42.049283 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-vxrd2"] Dec 11 10:23:42 crc kubenswrapper[4746]: I1211 10:23:42.066450 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3ba3-account-create-update-lztww"] Dec 11 10:23:42 crc kubenswrapper[4746]: I1211 10:23:42.076699 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-757q5"] Dec 11 10:23:42 crc kubenswrapper[4746]: I1211 10:23:42.088301 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3ba3-account-create-update-lztww"] Dec 11 10:23:42 crc kubenswrapper[4746]: I1211 10:23:42.096415 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-2f33-account-create-update-4plh6"] Dec 11 10:23:42 crc kubenswrapper[4746]: I1211 10:23:42.104079 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-db-create-pbrhh"] Dec 11 10:23:42 crc kubenswrapper[4746]: I1211 10:23:42.111659 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-vxrd2"] Dec 11 10:23:42 crc kubenswrapper[4746]: I1211 10:23:42.120812 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-626c-account-create-update-46ncn"] Dec 11 10:23:42 crc kubenswrapper[4746]: I1211 10:23:42.128302 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-2f33-account-create-update-4plh6"] Dec 11 10:23:42 crc kubenswrapper[4746]: I1211 10:23:42.136015 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-pbrhh"] Dec 11 10:23:42 crc kubenswrapper[4746]: I1211 10:23:42.143469 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-757q5"] Dec 11 10:23:42 crc kubenswrapper[4746]: I1211 10:23:42.149881 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-626c-account-create-update-46ncn"] Dec 11 10:23:43 crc kubenswrapper[4746]: I1211 10:23:43.643782 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041f7301-c875-4ebf-a917-6462c50316ce" path="/var/lib/kubelet/pods/041f7301-c875-4ebf-a917-6462c50316ce/volumes" Dec 11 10:23:43 crc kubenswrapper[4746]: I1211 10:23:43.644603 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19ff8d66-0de2-4a8f-977a-810857fc5103" path="/var/lib/kubelet/pods/19ff8d66-0de2-4a8f-977a-810857fc5103/volumes" Dec 11 10:23:43 crc kubenswrapper[4746]: I1211 10:23:43.645406 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da4dce39-8df9-4243-9c8e-268f8f662c97" path="/var/lib/kubelet/pods/da4dce39-8df9-4243-9c8e-268f8f662c97/volumes" Dec 11 10:23:43 crc kubenswrapper[4746]: I1211 10:23:43.645977 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8bce084-6da4-4c08-804f-1dd5d7cfdd8a" 
path="/var/lib/kubelet/pods/e8bce084-6da4-4c08-804f-1dd5d7cfdd8a/volumes" Dec 11 10:23:43 crc kubenswrapper[4746]: I1211 10:23:43.647140 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f531d95a-9fa0-4897-bdea-ee3e43914203" path="/var/lib/kubelet/pods/f531d95a-9fa0-4897-bdea-ee3e43914203/volumes" Dec 11 10:23:43 crc kubenswrapper[4746]: I1211 10:23:43.647725 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb138237-45b8-4bd9-a20d-0125fbac9770" path="/var/lib/kubelet/pods/fb138237-45b8-4bd9-a20d-0125fbac9770/volumes" Dec 11 10:23:54 crc kubenswrapper[4746]: I1211 10:23:54.630804 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:23:54 crc kubenswrapper[4746]: E1211 10:23:54.631723 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:24:03 crc kubenswrapper[4746]: I1211 10:24:03.482118 4746 scope.go:117] "RemoveContainer" containerID="911ecaedc7f0b2cd2dac2054b4ebc17c47bd5e59baf721d3d50cfec2dc01e3a1" Dec 11 10:24:03 crc kubenswrapper[4746]: I1211 10:24:03.516378 4746 scope.go:117] "RemoveContainer" containerID="952fd9a208e4982afaf0814269e62ab337a99f62e259927b756f734180a87d31" Dec 11 10:24:03 crc kubenswrapper[4746]: I1211 10:24:03.563195 4746 scope.go:117] "RemoveContainer" containerID="18ed54230b33fbd7c3567795524ead5403f88f23ef97cbc043ce39c506eff7eb" Dec 11 10:24:03 crc kubenswrapper[4746]: I1211 10:24:03.634177 4746 scope.go:117] "RemoveContainer" containerID="a64766f15ac7662c2bbcf7258aa33fbe77259b960f0fc146f2021e7c9ae1e52d" Dec 11 10:24:03 crc 
kubenswrapper[4746]: I1211 10:24:03.658080 4746 scope.go:117] "RemoveContainer" containerID="37233fdaed4ebeddbea6deeb96029bb2921b9a98cb76a888ec67b249eaed0962" Dec 11 10:24:03 crc kubenswrapper[4746]: I1211 10:24:03.712657 4746 scope.go:117] "RemoveContainer" containerID="f7f3521ec299e4cfbd209fb3e292113d40932640b9bd11742f7c25d17eeecd8b" Dec 11 10:24:05 crc kubenswrapper[4746]: I1211 10:24:05.638724 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:24:05 crc kubenswrapper[4746]: E1211 10:24:05.639698 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:24:16 crc kubenswrapper[4746]: I1211 10:24:16.050462 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f6gcs"] Dec 11 10:24:16 crc kubenswrapper[4746]: I1211 10:24:16.077960 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-f6gcs"] Dec 11 10:24:17 crc kubenswrapper[4746]: I1211 10:24:17.653897 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53396ba8-39b2-43dc-a0d3-acef5fb61cda" path="/var/lib/kubelet/pods/53396ba8-39b2-43dc-a0d3-acef5fb61cda/volumes" Dec 11 10:24:18 crc kubenswrapper[4746]: I1211 10:24:18.631360 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:24:18 crc kubenswrapper[4746]: E1211 10:24:18.634017 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:24:29 crc kubenswrapper[4746]: I1211 10:24:29.630310 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:24:29 crc kubenswrapper[4746]: E1211 10:24:29.631063 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:24:40 crc kubenswrapper[4746]: I1211 10:24:40.554797 4746 generic.go:334] "Generic (PLEG): container finished" podID="f6fdd767-cd5e-4858-9c19-ebc73fd789d4" containerID="f71724d3213796b8db22777f6ee9294e0fa424212f74f4f46cb369920f53ac71" exitCode=0 Dec 11 10:24:40 crc kubenswrapper[4746]: I1211 10:24:40.554872 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" event={"ID":"f6fdd767-cd5e-4858-9c19-ebc73fd789d4","Type":"ContainerDied","Data":"f71724d3213796b8db22777f6ee9294e0fa424212f74f4f46cb369920f53ac71"} Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.108875 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.224733 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-inventory\") pod \"f6fdd767-cd5e-4858-9c19-ebc73fd789d4\" (UID: \"f6fdd767-cd5e-4858-9c19-ebc73fd789d4\") " Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.225117 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-ssh-key\") pod \"f6fdd767-cd5e-4858-9c19-ebc73fd789d4\" (UID: \"f6fdd767-cd5e-4858-9c19-ebc73fd789d4\") " Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.225247 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9blv6\" (UniqueName: \"kubernetes.io/projected/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-kube-api-access-9blv6\") pod \"f6fdd767-cd5e-4858-9c19-ebc73fd789d4\" (UID: \"f6fdd767-cd5e-4858-9c19-ebc73fd789d4\") " Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.233960 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-kube-api-access-9blv6" (OuterVolumeSpecName: "kube-api-access-9blv6") pod "f6fdd767-cd5e-4858-9c19-ebc73fd789d4" (UID: "f6fdd767-cd5e-4858-9c19-ebc73fd789d4"). InnerVolumeSpecName "kube-api-access-9blv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.264036 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f6fdd767-cd5e-4858-9c19-ebc73fd789d4" (UID: "f6fdd767-cd5e-4858-9c19-ebc73fd789d4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.274475 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-inventory" (OuterVolumeSpecName: "inventory") pod "f6fdd767-cd5e-4858-9c19-ebc73fd789d4" (UID: "f6fdd767-cd5e-4858-9c19-ebc73fd789d4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.327936 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9blv6\" (UniqueName: \"kubernetes.io/projected/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-kube-api-access-9blv6\") on node \"crc\" DevicePath \"\"" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.327983 4746 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.327993 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f6fdd767-cd5e-4858-9c19-ebc73fd789d4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.582606 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" event={"ID":"f6fdd767-cd5e-4858-9c19-ebc73fd789d4","Type":"ContainerDied","Data":"829eec46a6b34866a578fd37748d4ef450a2dc1310c93e9e9d80f2ede10d9e53"} Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.582669 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="829eec46a6b34866a578fd37748d4ef450a2dc1310c93e9e9d80f2ede10d9e53" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.582750 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.631368 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:24:42 crc kubenswrapper[4746]: E1211 10:24:42.631774 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.703285 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t"] Dec 11 10:24:42 crc kubenswrapper[4746]: E1211 10:24:42.704496 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6fdd767-cd5e-4858-9c19-ebc73fd789d4" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.704556 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6fdd767-cd5e-4858-9c19-ebc73fd789d4" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.705068 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6fdd767-cd5e-4858-9c19-ebc73fd789d4" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.706187 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.712011 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.712306 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.713564 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.719724 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.719851 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t"] Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.840381 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d5a90b1-c946-4b31-9337-9b13d58f9819-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mw75t\" (UID: \"0d5a90b1-c946-4b31-9337-9b13d58f9819\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.840633 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nxqv\" (UniqueName: \"kubernetes.io/projected/0d5a90b1-c946-4b31-9337-9b13d58f9819-kube-api-access-9nxqv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mw75t\" (UID: \"0d5a90b1-c946-4b31-9337-9b13d58f9819\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 
10:24:42.840736 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d5a90b1-c946-4b31-9337-9b13d58f9819-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mw75t\" (UID: \"0d5a90b1-c946-4b31-9337-9b13d58f9819\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.943682 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nxqv\" (UniqueName: \"kubernetes.io/projected/0d5a90b1-c946-4b31-9337-9b13d58f9819-kube-api-access-9nxqv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mw75t\" (UID: \"0d5a90b1-c946-4b31-9337-9b13d58f9819\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.943851 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d5a90b1-c946-4b31-9337-9b13d58f9819-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mw75t\" (UID: \"0d5a90b1-c946-4b31-9337-9b13d58f9819\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.943949 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d5a90b1-c946-4b31-9337-9b13d58f9819-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mw75t\" (UID: \"0d5a90b1-c946-4b31-9337-9b13d58f9819\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.949000 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d5a90b1-c946-4b31-9337-9b13d58f9819-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-mw75t\" (UID: \"0d5a90b1-c946-4b31-9337-9b13d58f9819\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.956908 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d5a90b1-c946-4b31-9337-9b13d58f9819-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mw75t\" (UID: \"0d5a90b1-c946-4b31-9337-9b13d58f9819\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" Dec 11 10:24:42 crc kubenswrapper[4746]: I1211 10:24:42.966454 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nxqv\" (UniqueName: \"kubernetes.io/projected/0d5a90b1-c946-4b31-9337-9b13d58f9819-kube-api-access-9nxqv\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mw75t\" (UID: \"0d5a90b1-c946-4b31-9337-9b13d58f9819\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" Dec 11 10:24:43 crc kubenswrapper[4746]: I1211 10:24:43.038036 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" Dec 11 10:24:43 crc kubenswrapper[4746]: I1211 10:24:43.581208 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t"] Dec 11 10:24:44 crc kubenswrapper[4746]: I1211 10:24:44.051899 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-zhfc5"] Dec 11 10:24:44 crc kubenswrapper[4746]: I1211 10:24:44.063347 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-zhfc5"] Dec 11 10:24:44 crc kubenswrapper[4746]: I1211 10:24:44.609179 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" event={"ID":"0d5a90b1-c946-4b31-9337-9b13d58f9819","Type":"ContainerStarted","Data":"3c0beea6a5990157dd283e196badb7c8b265d9d5e48d46e2929766d0ba49ccca"} Dec 11 10:24:45 crc kubenswrapper[4746]: I1211 10:24:45.623089 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" event={"ID":"0d5a90b1-c946-4b31-9337-9b13d58f9819","Type":"ContainerStarted","Data":"c7b9c861f5e8f1665137d41c1710eb3eff84ea952c6de674688d02039dd9dd45"} Dec 11 10:24:45 crc kubenswrapper[4746]: I1211 10:24:45.644010 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" podStartSLOduration=3.050745818 podStartE2EDuration="3.643990865s" podCreationTimestamp="2025-12-11 10:24:42 +0000 UTC" firstStartedPulling="2025-12-11 10:24:43.619849167 +0000 UTC m=+1856.479712480" lastFinishedPulling="2025-12-11 10:24:44.213094214 +0000 UTC m=+1857.072957527" observedRunningTime="2025-12-11 10:24:45.638899718 +0000 UTC m=+1858.498763041" watchObservedRunningTime="2025-12-11 10:24:45.643990865 +0000 UTC m=+1858.503854178" Dec 11 10:24:45 crc 
kubenswrapper[4746]: I1211 10:24:45.651101 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67204404-706d-4886-bb9d-ffa996f7bd90" path="/var/lib/kubelet/pods/67204404-706d-4886-bb9d-ffa996f7bd90/volumes" Dec 11 10:24:49 crc kubenswrapper[4746]: I1211 10:24:49.673825 4746 generic.go:334] "Generic (PLEG): container finished" podID="0d5a90b1-c946-4b31-9337-9b13d58f9819" containerID="c7b9c861f5e8f1665137d41c1710eb3eff84ea952c6de674688d02039dd9dd45" exitCode=0 Dec 11 10:24:49 crc kubenswrapper[4746]: I1211 10:24:49.673914 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" event={"ID":"0d5a90b1-c946-4b31-9337-9b13d58f9819","Type":"ContainerDied","Data":"c7b9c861f5e8f1665137d41c1710eb3eff84ea952c6de674688d02039dd9dd45"} Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.047603 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hx64l"] Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.064374 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-hx64l"] Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.128340 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.274399 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nxqv\" (UniqueName: \"kubernetes.io/projected/0d5a90b1-c946-4b31-9337-9b13d58f9819-kube-api-access-9nxqv\") pod \"0d5a90b1-c946-4b31-9337-9b13d58f9819\" (UID: \"0d5a90b1-c946-4b31-9337-9b13d58f9819\") " Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.274793 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d5a90b1-c946-4b31-9337-9b13d58f9819-ssh-key\") pod \"0d5a90b1-c946-4b31-9337-9b13d58f9819\" (UID: \"0d5a90b1-c946-4b31-9337-9b13d58f9819\") " Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.274832 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d5a90b1-c946-4b31-9337-9b13d58f9819-inventory\") pod \"0d5a90b1-c946-4b31-9337-9b13d58f9819\" (UID: \"0d5a90b1-c946-4b31-9337-9b13d58f9819\") " Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.298391 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d5a90b1-c946-4b31-9337-9b13d58f9819-kube-api-access-9nxqv" (OuterVolumeSpecName: "kube-api-access-9nxqv") pod "0d5a90b1-c946-4b31-9337-9b13d58f9819" (UID: "0d5a90b1-c946-4b31-9337-9b13d58f9819"). InnerVolumeSpecName "kube-api-access-9nxqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.324314 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5a90b1-c946-4b31-9337-9b13d58f9819-inventory" (OuterVolumeSpecName: "inventory") pod "0d5a90b1-c946-4b31-9337-9b13d58f9819" (UID: "0d5a90b1-c946-4b31-9337-9b13d58f9819"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.341019 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d5a90b1-c946-4b31-9337-9b13d58f9819-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0d5a90b1-c946-4b31-9337-9b13d58f9819" (UID: "0d5a90b1-c946-4b31-9337-9b13d58f9819"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.378847 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0d5a90b1-c946-4b31-9337-9b13d58f9819-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.378883 4746 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0d5a90b1-c946-4b31-9337-9b13d58f9819-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.378896 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nxqv\" (UniqueName: \"kubernetes.io/projected/0d5a90b1-c946-4b31-9337-9b13d58f9819-kube-api-access-9nxqv\") on node \"crc\" DevicePath \"\"" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.695768 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94ccb677-c026-43d0-8a64-b5267ee040e3" path="/var/lib/kubelet/pods/94ccb677-c026-43d0-8a64-b5267ee040e3/volumes" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.699296 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" event={"ID":"0d5a90b1-c946-4b31-9337-9b13d58f9819","Type":"ContainerDied","Data":"3c0beea6a5990157dd283e196badb7c8b265d9d5e48d46e2929766d0ba49ccca"} Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.699346 4746 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3c0beea6a5990157dd283e196badb7c8b265d9d5e48d46e2929766d0ba49ccca" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.699365 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mw75t" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.836327 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts"] Dec 11 10:24:51 crc kubenswrapper[4746]: E1211 10:24:51.836909 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d5a90b1-c946-4b31-9337-9b13d58f9819" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.836935 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d5a90b1-c946-4b31-9337-9b13d58f9819" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.837223 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d5a90b1-c946-4b31-9337-9b13d58f9819" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.838322 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.841850 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.846295 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.846436 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.847731 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.850448 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts"] Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.896794 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n69p\" (UniqueName: \"kubernetes.io/projected/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-kube-api-access-9n69p\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-94sts\" (UID: \"9a09e7c3-6aed-4155-bbe9-7be9b885cd57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.897736 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-94sts\" (UID: \"9a09e7c3-6aed-4155-bbe9-7be9b885cd57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" Dec 11 10:24:51 crc kubenswrapper[4746]: I1211 10:24:51.897898 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-94sts\" (UID: \"9a09e7c3-6aed-4155-bbe9-7be9b885cd57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" Dec 11 10:24:52 crc kubenswrapper[4746]: I1211 10:24:52.000231 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n69p\" (UniqueName: \"kubernetes.io/projected/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-kube-api-access-9n69p\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-94sts\" (UID: \"9a09e7c3-6aed-4155-bbe9-7be9b885cd57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" Dec 11 10:24:52 crc kubenswrapper[4746]: I1211 10:24:52.000348 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-94sts\" (UID: \"9a09e7c3-6aed-4155-bbe9-7be9b885cd57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" Dec 11 10:24:52 crc kubenswrapper[4746]: I1211 10:24:52.000385 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-94sts\" (UID: \"9a09e7c3-6aed-4155-bbe9-7be9b885cd57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" Dec 11 10:24:52 crc kubenswrapper[4746]: I1211 10:24:52.005036 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-94sts\" (UID: 
\"9a09e7c3-6aed-4155-bbe9-7be9b885cd57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" Dec 11 10:24:52 crc kubenswrapper[4746]: I1211 10:24:52.005661 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-94sts\" (UID: \"9a09e7c3-6aed-4155-bbe9-7be9b885cd57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" Dec 11 10:24:52 crc kubenswrapper[4746]: I1211 10:24:52.019761 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n69p\" (UniqueName: \"kubernetes.io/projected/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-kube-api-access-9n69p\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-94sts\" (UID: \"9a09e7c3-6aed-4155-bbe9-7be9b885cd57\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" Dec 11 10:24:52 crc kubenswrapper[4746]: I1211 10:24:52.160639 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" Dec 11 10:24:52 crc kubenswrapper[4746]: I1211 10:24:52.761213 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts"] Dec 11 10:24:53 crc kubenswrapper[4746]: I1211 10:24:53.717887 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" event={"ID":"9a09e7c3-6aed-4155-bbe9-7be9b885cd57","Type":"ContainerStarted","Data":"a0becb4d0b647b838e1d1cee2d898f35239ef9364fe7e766b27d78ec9b894311"} Dec 11 10:24:53 crc kubenswrapper[4746]: I1211 10:24:53.718441 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" event={"ID":"9a09e7c3-6aed-4155-bbe9-7be9b885cd57","Type":"ContainerStarted","Data":"ff286842df3e69b629d2599729858c1a9cfea7665e51a561e2a06fadafb7c204"} Dec 11 10:24:53 crc kubenswrapper[4746]: I1211 10:24:53.746791 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" podStartSLOduration=2.2490943899999998 podStartE2EDuration="2.746773715s" podCreationTimestamp="2025-12-11 10:24:51 +0000 UTC" firstStartedPulling="2025-12-11 10:24:52.766629495 +0000 UTC m=+1865.626492808" lastFinishedPulling="2025-12-11 10:24:53.26430882 +0000 UTC m=+1866.124172133" observedRunningTime="2025-12-11 10:24:53.731198156 +0000 UTC m=+1866.591061469" watchObservedRunningTime="2025-12-11 10:24:53.746773715 +0000 UTC m=+1866.606637028" Dec 11 10:24:56 crc kubenswrapper[4746]: I1211 10:24:56.630936 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:24:56 crc kubenswrapper[4746]: E1211 10:24:56.631471 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:25:03 crc kubenswrapper[4746]: I1211 10:25:03.855764 4746 scope.go:117] "RemoveContainer" containerID="eb098b66a9da4d07dec346ad9235a1cfd53ae43ab88fc6d48ec21336c2a14f03" Dec 11 10:25:03 crc kubenswrapper[4746]: I1211 10:25:03.907247 4746 scope.go:117] "RemoveContainer" containerID="d5e6c802dab5139d15be3abb99132f22bf2e707a3b49fc27c041e4359c1e3d2b" Dec 11 10:25:03 crc kubenswrapper[4746]: I1211 10:25:03.955762 4746 scope.go:117] "RemoveContainer" containerID="3c29ae587a217519d5f98e11075e05e86d821c2579c36eab6200220db20a34eb" Dec 11 10:25:07 crc kubenswrapper[4746]: I1211 10:25:07.901544 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:25:07 crc kubenswrapper[4746]: E1211 10:25:07.902276 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:25:21 crc kubenswrapper[4746]: I1211 10:25:21.631080 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:25:21 crc kubenswrapper[4746]: E1211 10:25:21.632384 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:25:29 crc kubenswrapper[4746]: I1211 10:25:29.058804 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-rtspz"] Dec 11 10:25:29 crc kubenswrapper[4746]: I1211 10:25:29.071204 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-rtspz"] Dec 11 10:25:29 crc kubenswrapper[4746]: I1211 10:25:29.642812 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca" path="/var/lib/kubelet/pods/e6aa3845-a29e-4f96-b04a-d3ac1ffad8ca/volumes" Dec 11 10:25:33 crc kubenswrapper[4746]: I1211 10:25:33.180612 4746 generic.go:334] "Generic (PLEG): container finished" podID="9a09e7c3-6aed-4155-bbe9-7be9b885cd57" containerID="a0becb4d0b647b838e1d1cee2d898f35239ef9364fe7e766b27d78ec9b894311" exitCode=0 Dec 11 10:25:33 crc kubenswrapper[4746]: I1211 10:25:33.180642 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" event={"ID":"9a09e7c3-6aed-4155-bbe9-7be9b885cd57","Type":"ContainerDied","Data":"a0becb4d0b647b838e1d1cee2d898f35239ef9364fe7e766b27d78ec9b894311"} Dec 11 10:25:34 crc kubenswrapper[4746]: I1211 10:25:34.641195 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" Dec 11 10:25:34 crc kubenswrapper[4746]: I1211 10:25:34.712890 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-inventory\") pod \"9a09e7c3-6aed-4155-bbe9-7be9b885cd57\" (UID: \"9a09e7c3-6aed-4155-bbe9-7be9b885cd57\") " Dec 11 10:25:34 crc kubenswrapper[4746]: I1211 10:25:34.713214 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n69p\" (UniqueName: \"kubernetes.io/projected/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-kube-api-access-9n69p\") pod \"9a09e7c3-6aed-4155-bbe9-7be9b885cd57\" (UID: \"9a09e7c3-6aed-4155-bbe9-7be9b885cd57\") " Dec 11 10:25:34 crc kubenswrapper[4746]: I1211 10:25:34.713451 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-ssh-key\") pod \"9a09e7c3-6aed-4155-bbe9-7be9b885cd57\" (UID: \"9a09e7c3-6aed-4155-bbe9-7be9b885cd57\") " Dec 11 10:25:34 crc kubenswrapper[4746]: I1211 10:25:34.719677 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-kube-api-access-9n69p" (OuterVolumeSpecName: "kube-api-access-9n69p") pod "9a09e7c3-6aed-4155-bbe9-7be9b885cd57" (UID: "9a09e7c3-6aed-4155-bbe9-7be9b885cd57"). InnerVolumeSpecName "kube-api-access-9n69p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:25:34 crc kubenswrapper[4746]: I1211 10:25:34.746558 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-inventory" (OuterVolumeSpecName: "inventory") pod "9a09e7c3-6aed-4155-bbe9-7be9b885cd57" (UID: "9a09e7c3-6aed-4155-bbe9-7be9b885cd57"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:25:34 crc kubenswrapper[4746]: I1211 10:25:34.750536 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "9a09e7c3-6aed-4155-bbe9-7be9b885cd57" (UID: "9a09e7c3-6aed-4155-bbe9-7be9b885cd57"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:25:34 crc kubenswrapper[4746]: I1211 10:25:34.819593 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:34 crc kubenswrapper[4746]: I1211 10:25:34.819772 4746 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:34 crc kubenswrapper[4746]: I1211 10:25:34.819961 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n69p\" (UniqueName: \"kubernetes.io/projected/9a09e7c3-6aed-4155-bbe9-7be9b885cd57-kube-api-access-9n69p\") on node \"crc\" DevicePath \"\"" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.201534 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" event={"ID":"9a09e7c3-6aed-4155-bbe9-7be9b885cd57","Type":"ContainerDied","Data":"ff286842df3e69b629d2599729858c1a9cfea7665e51a561e2a06fadafb7c204"} Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.201571 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-94sts" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.201578 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff286842df3e69b629d2599729858c1a9cfea7665e51a561e2a06fadafb7c204" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.290023 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2"] Dec 11 10:25:35 crc kubenswrapper[4746]: E1211 10:25:35.290540 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a09e7c3-6aed-4155-bbe9-7be9b885cd57" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.290564 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a09e7c3-6aed-4155-bbe9-7be9b885cd57" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.290815 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a09e7c3-6aed-4155-bbe9-7be9b885cd57" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.291692 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.293822 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.293906 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.294550 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.295987 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.304569 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2"] Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.329680 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpf8q\" (UniqueName: \"kubernetes.io/projected/797c27c3-e8d6-4324-926e-e3b859e05b51-kube-api-access-kpf8q\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2\" (UID: \"797c27c3-e8d6-4324-926e-e3b859e05b51\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.329755 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/797c27c3-e8d6-4324-926e-e3b859e05b51-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2\" (UID: \"797c27c3-e8d6-4324-926e-e3b859e05b51\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.329885 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/797c27c3-e8d6-4324-926e-e3b859e05b51-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2\" (UID: \"797c27c3-e8d6-4324-926e-e3b859e05b51\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.432265 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpf8q\" (UniqueName: \"kubernetes.io/projected/797c27c3-e8d6-4324-926e-e3b859e05b51-kube-api-access-kpf8q\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2\" (UID: \"797c27c3-e8d6-4324-926e-e3b859e05b51\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.432342 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/797c27c3-e8d6-4324-926e-e3b859e05b51-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2\" (UID: \"797c27c3-e8d6-4324-926e-e3b859e05b51\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.432438 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/797c27c3-e8d6-4324-926e-e3b859e05b51-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2\" (UID: \"797c27c3-e8d6-4324-926e-e3b859e05b51\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.438630 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/797c27c3-e8d6-4324-926e-e3b859e05b51-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2\" (UID: 
\"797c27c3-e8d6-4324-926e-e3b859e05b51\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.442703 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/797c27c3-e8d6-4324-926e-e3b859e05b51-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2\" (UID: \"797c27c3-e8d6-4324-926e-e3b859e05b51\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.455159 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpf8q\" (UniqueName: \"kubernetes.io/projected/797c27c3-e8d6-4324-926e-e3b859e05b51-kube-api-access-kpf8q\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2\" (UID: \"797c27c3-e8d6-4324-926e-e3b859e05b51\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" Dec 11 10:25:35 crc kubenswrapper[4746]: I1211 10:25:35.659411 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" Dec 11 10:25:36 crc kubenswrapper[4746]: I1211 10:25:36.197542 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2"] Dec 11 10:25:36 crc kubenswrapper[4746]: I1211 10:25:36.201379 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 10:25:36 crc kubenswrapper[4746]: I1211 10:25:36.214504 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" event={"ID":"797c27c3-e8d6-4324-926e-e3b859e05b51","Type":"ContainerStarted","Data":"079240b0c7161cc855021e85dae6c4e32cb4792fe5374d7dedae08b7fa78f33c"} Dec 11 10:25:36 crc kubenswrapper[4746]: I1211 10:25:36.630137 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:25:36 crc kubenswrapper[4746]: E1211 10:25:36.630586 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:25:37 crc kubenswrapper[4746]: I1211 10:25:37.227165 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" event={"ID":"797c27c3-e8d6-4324-926e-e3b859e05b51","Type":"ContainerStarted","Data":"a57bf654834af9d0aea99d1b4d8c4038df56d9ab7997270e38ce86dfa70df2d8"} Dec 11 10:25:37 crc kubenswrapper[4746]: I1211 10:25:37.259677 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" podStartSLOduration=1.8241821360000001 podStartE2EDuration="2.259395694s" podCreationTimestamp="2025-12-11 10:25:35 +0000 UTC" firstStartedPulling="2025-12-11 10:25:36.201174391 +0000 UTC m=+1909.061037704" lastFinishedPulling="2025-12-11 10:25:36.636387949 +0000 UTC m=+1909.496251262" observedRunningTime="2025-12-11 10:25:37.24842805 +0000 UTC m=+1910.108291363" watchObservedRunningTime="2025-12-11 10:25:37.259395694 +0000 UTC m=+1910.119259017" Dec 11 10:25:49 crc kubenswrapper[4746]: I1211 10:25:49.631563 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:25:49 crc kubenswrapper[4746]: E1211 10:25:49.632420 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:26:04 crc kubenswrapper[4746]: I1211 10:26:04.059838 4746 scope.go:117] "RemoveContainer" containerID="21bcddad0e871d6889940a9b8d1dd13c838566ca365d8a6b601f889ecfc780b3" Dec 11 10:26:04 crc kubenswrapper[4746]: I1211 10:26:04.630541 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:26:04 crc kubenswrapper[4746]: E1211 10:26:04.631178 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" 
podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:26:16 crc kubenswrapper[4746]: I1211 10:26:16.630534 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:26:16 crc kubenswrapper[4746]: E1211 10:26:16.631295 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:26:28 crc kubenswrapper[4746]: I1211 10:26:28.631499 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:26:28 crc kubenswrapper[4746]: E1211 10:26:28.638785 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:26:29 crc kubenswrapper[4746]: I1211 10:26:29.482568 4746 generic.go:334] "Generic (PLEG): container finished" podID="797c27c3-e8d6-4324-926e-e3b859e05b51" containerID="a57bf654834af9d0aea99d1b4d8c4038df56d9ab7997270e38ce86dfa70df2d8" exitCode=0 Dec 11 10:26:29 crc kubenswrapper[4746]: I1211 10:26:29.482636 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" event={"ID":"797c27c3-e8d6-4324-926e-e3b859e05b51","Type":"ContainerDied","Data":"a57bf654834af9d0aea99d1b4d8c4038df56d9ab7997270e38ce86dfa70df2d8"} Dec 11 10:26:31 
crc kubenswrapper[4746]: I1211 10:26:31.025485 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.097722 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpf8q\" (UniqueName: \"kubernetes.io/projected/797c27c3-e8d6-4324-926e-e3b859e05b51-kube-api-access-kpf8q\") pod \"797c27c3-e8d6-4324-926e-e3b859e05b51\" (UID: \"797c27c3-e8d6-4324-926e-e3b859e05b51\") " Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.097969 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/797c27c3-e8d6-4324-926e-e3b859e05b51-ssh-key\") pod \"797c27c3-e8d6-4324-926e-e3b859e05b51\" (UID: \"797c27c3-e8d6-4324-926e-e3b859e05b51\") " Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.098291 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/797c27c3-e8d6-4324-926e-e3b859e05b51-inventory\") pod \"797c27c3-e8d6-4324-926e-e3b859e05b51\" (UID: \"797c27c3-e8d6-4324-926e-e3b859e05b51\") " Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.106146 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797c27c3-e8d6-4324-926e-e3b859e05b51-kube-api-access-kpf8q" (OuterVolumeSpecName: "kube-api-access-kpf8q") pod "797c27c3-e8d6-4324-926e-e3b859e05b51" (UID: "797c27c3-e8d6-4324-926e-e3b859e05b51"). InnerVolumeSpecName "kube-api-access-kpf8q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.135159 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/797c27c3-e8d6-4324-926e-e3b859e05b51-inventory" (OuterVolumeSpecName: "inventory") pod "797c27c3-e8d6-4324-926e-e3b859e05b51" (UID: "797c27c3-e8d6-4324-926e-e3b859e05b51"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.146265 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/797c27c3-e8d6-4324-926e-e3b859e05b51-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "797c27c3-e8d6-4324-926e-e3b859e05b51" (UID: "797c27c3-e8d6-4324-926e-e3b859e05b51"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.201038 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/797c27c3-e8d6-4324-926e-e3b859e05b51-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.201432 4746 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/797c27c3-e8d6-4324-926e-e3b859e05b51-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.201724 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpf8q\" (UniqueName: \"kubernetes.io/projected/797c27c3-e8d6-4324-926e-e3b859e05b51-kube-api-access-kpf8q\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.509729 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" 
event={"ID":"797c27c3-e8d6-4324-926e-e3b859e05b51","Type":"ContainerDied","Data":"079240b0c7161cc855021e85dae6c4e32cb4792fe5374d7dedae08b7fa78f33c"} Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.509798 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="079240b0c7161cc855021e85dae6c4e32cb4792fe5374d7dedae08b7fa78f33c" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.509886 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.688899 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5c2w5"] Dec 11 10:26:31 crc kubenswrapper[4746]: E1211 10:26:31.689546 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797c27c3-e8d6-4324-926e-e3b859e05b51" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.689576 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="797c27c3-e8d6-4324-926e-e3b859e05b51" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.689813 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="797c27c3-e8d6-4324-926e-e3b859e05b51" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.690728 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.696990 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.697133 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.697337 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.697421 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.700952 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5c2w5"] Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.715272 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b4jn\" (UniqueName: \"kubernetes.io/projected/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-kube-api-access-4b4jn\") pod \"ssh-known-hosts-edpm-deployment-5c2w5\" (UID: \"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c\") " pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.715372 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5c2w5\" (UID: \"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c\") " pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.715544 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5c2w5\" (UID: \"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c\") " pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.830495 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b4jn\" (UniqueName: \"kubernetes.io/projected/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-kube-api-access-4b4jn\") pod \"ssh-known-hosts-edpm-deployment-5c2w5\" (UID: \"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c\") " pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.830673 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5c2w5\" (UID: \"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c\") " pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.830760 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5c2w5\" (UID: \"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c\") " pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.841176 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-5c2w5\" (UID: \"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c\") " pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" Dec 11 10:26:31 crc kubenswrapper[4746]: 
I1211 10:26:31.844731 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-5c2w5\" (UID: \"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c\") " pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" Dec 11 10:26:31 crc kubenswrapper[4746]: I1211 10:26:31.863578 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b4jn\" (UniqueName: \"kubernetes.io/projected/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-kube-api-access-4b4jn\") pod \"ssh-known-hosts-edpm-deployment-5c2w5\" (UID: \"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c\") " pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" Dec 11 10:26:32 crc kubenswrapper[4746]: I1211 10:26:32.014656 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" Dec 11 10:26:32 crc kubenswrapper[4746]: I1211 10:26:32.608923 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-5c2w5"] Dec 11 10:26:33 crc kubenswrapper[4746]: I1211 10:26:33.533410 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" event={"ID":"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c","Type":"ContainerStarted","Data":"b4a79ccf600b9592cec7b824dba6fc03b438117aa1fd8e2bec9292b53c32c3da"} Dec 11 10:26:34 crc kubenswrapper[4746]: I1211 10:26:34.544030 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" event={"ID":"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c","Type":"ContainerStarted","Data":"8245f3cf1647284cefc248d087eb4e152d19f811750eead5dbbf204bf6fe9fc7"} Dec 11 10:26:34 crc kubenswrapper[4746]: I1211 10:26:34.567911 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" podStartSLOduration=2.288434437 
podStartE2EDuration="3.567888528s" podCreationTimestamp="2025-12-11 10:26:31 +0000 UTC" firstStartedPulling="2025-12-11 10:26:32.619237307 +0000 UTC m=+1965.479100620" lastFinishedPulling="2025-12-11 10:26:33.898691398 +0000 UTC m=+1966.758554711" observedRunningTime="2025-12-11 10:26:34.566766037 +0000 UTC m=+1967.426629350" watchObservedRunningTime="2025-12-11 10:26:34.567888528 +0000 UTC m=+1967.427751841" Dec 11 10:26:41 crc kubenswrapper[4746]: I1211 10:26:41.615449 4746 generic.go:334] "Generic (PLEG): container finished" podID="ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c" containerID="8245f3cf1647284cefc248d087eb4e152d19f811750eead5dbbf204bf6fe9fc7" exitCode=0 Dec 11 10:26:41 crc kubenswrapper[4746]: I1211 10:26:41.615856 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" event={"ID":"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c","Type":"ContainerDied","Data":"8245f3cf1647284cefc248d087eb4e152d19f811750eead5dbbf204bf6fe9fc7"} Dec 11 10:26:41 crc kubenswrapper[4746]: I1211 10:26:41.633010 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:26:41 crc kubenswrapper[4746]: E1211 10:26:41.637433 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.063573 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.212188 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4b4jn\" (UniqueName: \"kubernetes.io/projected/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-kube-api-access-4b4jn\") pod \"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c\" (UID: \"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c\") " Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.212359 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-inventory-0\") pod \"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c\" (UID: \"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c\") " Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.212520 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-ssh-key-openstack-edpm-ipam\") pod \"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c\" (UID: \"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c\") " Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.220079 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-kube-api-access-4b4jn" (OuterVolumeSpecName: "kube-api-access-4b4jn") pod "ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c" (UID: "ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c"). InnerVolumeSpecName "kube-api-access-4b4jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.247313 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c" (UID: "ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.247819 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c" (UID: "ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.315228 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4b4jn\" (UniqueName: \"kubernetes.io/projected/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-kube-api-access-4b4jn\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.315296 4746 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.315312 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.639133 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.642410 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-5c2w5" event={"ID":"ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c","Type":"ContainerDied","Data":"b4a79ccf600b9592cec7b824dba6fc03b438117aa1fd8e2bec9292b53c32c3da"} Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.642458 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a79ccf600b9592cec7b824dba6fc03b438117aa1fd8e2bec9292b53c32c3da" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.727849 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm"] Dec 11 10:26:43 crc kubenswrapper[4746]: E1211 10:26:43.728405 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c" containerName="ssh-known-hosts-edpm-deployment" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.728430 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c" containerName="ssh-known-hosts-edpm-deployment" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.728713 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c" containerName="ssh-known-hosts-edpm-deployment" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.729615 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.731807 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.731981 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.736546 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.736546 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.749724 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm"] Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.826599 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6da9c642-e03d-463d-a3f1-c74bb27843c2-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5vqjm\" (UID: \"6da9c642-e03d-463d-a3f1-c74bb27843c2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.827004 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6da9c642-e03d-463d-a3f1-c74bb27843c2-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5vqjm\" (UID: \"6da9c642-e03d-463d-a3f1-c74bb27843c2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.827419 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l948g\" (UniqueName: \"kubernetes.io/projected/6da9c642-e03d-463d-a3f1-c74bb27843c2-kube-api-access-l948g\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5vqjm\" (UID: \"6da9c642-e03d-463d-a3f1-c74bb27843c2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.929534 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6da9c642-e03d-463d-a3f1-c74bb27843c2-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5vqjm\" (UID: \"6da9c642-e03d-463d-a3f1-c74bb27843c2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.929895 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6da9c642-e03d-463d-a3f1-c74bb27843c2-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5vqjm\" (UID: \"6da9c642-e03d-463d-a3f1-c74bb27843c2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.930070 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l948g\" (UniqueName: \"kubernetes.io/projected/6da9c642-e03d-463d-a3f1-c74bb27843c2-kube-api-access-l948g\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5vqjm\" (UID: \"6da9c642-e03d-463d-a3f1-c74bb27843c2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.933649 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6da9c642-e03d-463d-a3f1-c74bb27843c2-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5vqjm\" (UID: \"6da9c642-e03d-463d-a3f1-c74bb27843c2\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.944533 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6da9c642-e03d-463d-a3f1-c74bb27843c2-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5vqjm\" (UID: \"6da9c642-e03d-463d-a3f1-c74bb27843c2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" Dec 11 10:26:43 crc kubenswrapper[4746]: I1211 10:26:43.956812 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l948g\" (UniqueName: \"kubernetes.io/projected/6da9c642-e03d-463d-a3f1-c74bb27843c2-kube-api-access-l948g\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-5vqjm\" (UID: \"6da9c642-e03d-463d-a3f1-c74bb27843c2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" Dec 11 10:26:44 crc kubenswrapper[4746]: I1211 10:26:44.097596 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" Dec 11 10:26:44 crc kubenswrapper[4746]: I1211 10:26:44.489163 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm"] Dec 11 10:26:44 crc kubenswrapper[4746]: I1211 10:26:44.650886 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" event={"ID":"6da9c642-e03d-463d-a3f1-c74bb27843c2","Type":"ContainerStarted","Data":"b28ee7f7bb766d759842e7d2959ec109a575898e3a3de7c139100ebf6fb7bbb0"} Dec 11 10:26:46 crc kubenswrapper[4746]: I1211 10:26:46.673877 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" event={"ID":"6da9c642-e03d-463d-a3f1-c74bb27843c2","Type":"ContainerStarted","Data":"a9bc8bd7968f1e886583c7201683ee5b3cdf0e450516a5f2e7f6c2432b551c34"} Dec 11 10:26:46 crc kubenswrapper[4746]: I1211 10:26:46.700499 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" podStartSLOduration=2.209633023 podStartE2EDuration="3.700479276s" podCreationTimestamp="2025-12-11 10:26:43 +0000 UTC" firstStartedPulling="2025-12-11 10:26:44.498519775 +0000 UTC m=+1977.358383088" lastFinishedPulling="2025-12-11 10:26:45.989366028 +0000 UTC m=+1978.849229341" observedRunningTime="2025-12-11 10:26:46.69836176 +0000 UTC m=+1979.558225073" watchObservedRunningTime="2025-12-11 10:26:46.700479276 +0000 UTC m=+1979.560342599" Dec 11 10:26:54 crc kubenswrapper[4746]: I1211 10:26:54.630511 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:26:54 crc kubenswrapper[4746]: E1211 10:26:54.631521 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:26:56 crc kubenswrapper[4746]: I1211 10:26:56.786018 4746 generic.go:334] "Generic (PLEG): container finished" podID="6da9c642-e03d-463d-a3f1-c74bb27843c2" containerID="a9bc8bd7968f1e886583c7201683ee5b3cdf0e450516a5f2e7f6c2432b551c34" exitCode=0 Dec 11 10:26:56 crc kubenswrapper[4746]: I1211 10:26:56.786210 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" event={"ID":"6da9c642-e03d-463d-a3f1-c74bb27843c2","Type":"ContainerDied","Data":"a9bc8bd7968f1e886583c7201683ee5b3cdf0e450516a5f2e7f6c2432b551c34"} Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.253751 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.435458 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6da9c642-e03d-463d-a3f1-c74bb27843c2-inventory\") pod \"6da9c642-e03d-463d-a3f1-c74bb27843c2\" (UID: \"6da9c642-e03d-463d-a3f1-c74bb27843c2\") " Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.435869 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l948g\" (UniqueName: \"kubernetes.io/projected/6da9c642-e03d-463d-a3f1-c74bb27843c2-kube-api-access-l948g\") pod \"6da9c642-e03d-463d-a3f1-c74bb27843c2\" (UID: \"6da9c642-e03d-463d-a3f1-c74bb27843c2\") " Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.435933 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/6da9c642-e03d-463d-a3f1-c74bb27843c2-ssh-key\") pod \"6da9c642-e03d-463d-a3f1-c74bb27843c2\" (UID: \"6da9c642-e03d-463d-a3f1-c74bb27843c2\") " Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.460397 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6da9c642-e03d-463d-a3f1-c74bb27843c2-kube-api-access-l948g" (OuterVolumeSpecName: "kube-api-access-l948g") pod "6da9c642-e03d-463d-a3f1-c74bb27843c2" (UID: "6da9c642-e03d-463d-a3f1-c74bb27843c2"). InnerVolumeSpecName "kube-api-access-l948g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.475759 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da9c642-e03d-463d-a3f1-c74bb27843c2-inventory" (OuterVolumeSpecName: "inventory") pod "6da9c642-e03d-463d-a3f1-c74bb27843c2" (UID: "6da9c642-e03d-463d-a3f1-c74bb27843c2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.490706 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6da9c642-e03d-463d-a3f1-c74bb27843c2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6da9c642-e03d-463d-a3f1-c74bb27843c2" (UID: "6da9c642-e03d-463d-a3f1-c74bb27843c2"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.557953 4746 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6da9c642-e03d-463d-a3f1-c74bb27843c2-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.557989 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l948g\" (UniqueName: \"kubernetes.io/projected/6da9c642-e03d-463d-a3f1-c74bb27843c2-kube-api-access-l948g\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.558001 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6da9c642-e03d-463d-a3f1-c74bb27843c2-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.805252 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" event={"ID":"6da9c642-e03d-463d-a3f1-c74bb27843c2","Type":"ContainerDied","Data":"b28ee7f7bb766d759842e7d2959ec109a575898e3a3de7c139100ebf6fb7bbb0"} Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.805306 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b28ee7f7bb766d759842e7d2959ec109a575898e3a3de7c139100ebf6fb7bbb0" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.805360 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-5vqjm" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.895495 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw"] Dec 11 10:26:58 crc kubenswrapper[4746]: E1211 10:26:58.896081 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6da9c642-e03d-463d-a3f1-c74bb27843c2" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.896103 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="6da9c642-e03d-463d-a3f1-c74bb27843c2" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.896366 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="6da9c642-e03d-463d-a3f1-c74bb27843c2" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.897145 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.899707 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.900259 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.901221 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.901451 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.906891 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw"] Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.966507 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw\" (UID: \"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.966622 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm7kx\" (UniqueName: \"kubernetes.io/projected/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-kube-api-access-sm7kx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw\" (UID: \"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" Dec 11 10:26:58 crc kubenswrapper[4746]: I1211 10:26:58.966695 4746 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw\" (UID: \"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" Dec 11 10:26:59 crc kubenswrapper[4746]: I1211 10:26:59.068359 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm7kx\" (UniqueName: \"kubernetes.io/projected/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-kube-api-access-sm7kx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw\" (UID: \"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" Dec 11 10:26:59 crc kubenswrapper[4746]: I1211 10:26:59.068649 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw\" (UID: \"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" Dec 11 10:26:59 crc kubenswrapper[4746]: I1211 10:26:59.068792 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw\" (UID: \"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" Dec 11 10:26:59 crc kubenswrapper[4746]: I1211 10:26:59.073200 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw\" (UID: 
\"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" Dec 11 10:26:59 crc kubenswrapper[4746]: I1211 10:26:59.073683 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw\" (UID: \"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" Dec 11 10:26:59 crc kubenswrapper[4746]: I1211 10:26:59.093426 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm7kx\" (UniqueName: \"kubernetes.io/projected/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-kube-api-access-sm7kx\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw\" (UID: \"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" Dec 11 10:26:59 crc kubenswrapper[4746]: I1211 10:26:59.224026 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" Dec 11 10:26:59 crc kubenswrapper[4746]: I1211 10:26:59.729975 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw"] Dec 11 10:26:59 crc kubenswrapper[4746]: I1211 10:26:59.827660 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" event={"ID":"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92","Type":"ContainerStarted","Data":"d16c758b317918af85df579aa521bb30222a8828d9d7044d3d65f1a5f31bf712"} Dec 11 10:27:00 crc kubenswrapper[4746]: I1211 10:27:00.839876 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" event={"ID":"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92","Type":"ContainerStarted","Data":"c4c2911819f4ad945ef79c3891be842d24c0d0e9d69b2be973166c1431082504"} Dec 11 10:27:00 crc kubenswrapper[4746]: I1211 10:27:00.859918 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" podStartSLOduration=2.16058261 podStartE2EDuration="2.859889271s" podCreationTimestamp="2025-12-11 10:26:58 +0000 UTC" firstStartedPulling="2025-12-11 10:26:59.734907098 +0000 UTC m=+1992.594770411" lastFinishedPulling="2025-12-11 10:27:00.434213749 +0000 UTC m=+1993.294077072" observedRunningTime="2025-12-11 10:27:00.85507502 +0000 UTC m=+1993.714938333" watchObservedRunningTime="2025-12-11 10:27:00.859889271 +0000 UTC m=+1993.719752594" Dec 11 10:27:01 crc kubenswrapper[4746]: I1211 10:27:01.439680 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zc49l"] Dec 11 10:27:01 crc kubenswrapper[4746]: I1211 10:27:01.442028 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:01 crc kubenswrapper[4746]: I1211 10:27:01.458758 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zc49l"] Dec 11 10:27:01 crc kubenswrapper[4746]: I1211 10:27:01.604166 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-utilities\") pod \"redhat-operators-zc49l\" (UID: \"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65\") " pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:01 crc kubenswrapper[4746]: I1211 10:27:01.604499 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-catalog-content\") pod \"redhat-operators-zc49l\" (UID: \"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65\") " pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:01 crc kubenswrapper[4746]: I1211 10:27:01.604595 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b9qh\" (UniqueName: \"kubernetes.io/projected/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-kube-api-access-9b9qh\") pod \"redhat-operators-zc49l\" (UID: \"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65\") " pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:01 crc kubenswrapper[4746]: I1211 10:27:01.706729 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b9qh\" (UniqueName: \"kubernetes.io/projected/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-kube-api-access-9b9qh\") pod \"redhat-operators-zc49l\" (UID: \"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65\") " pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:01 crc kubenswrapper[4746]: I1211 10:27:01.706935 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-utilities\") pod \"redhat-operators-zc49l\" (UID: \"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65\") " pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:01 crc kubenswrapper[4746]: I1211 10:27:01.706957 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-catalog-content\") pod \"redhat-operators-zc49l\" (UID: \"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65\") " pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:01 crc kubenswrapper[4746]: I1211 10:27:01.707482 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-utilities\") pod \"redhat-operators-zc49l\" (UID: \"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65\") " pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:01 crc kubenswrapper[4746]: I1211 10:27:01.707482 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-catalog-content\") pod \"redhat-operators-zc49l\" (UID: \"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65\") " pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:01 crc kubenswrapper[4746]: I1211 10:27:01.753204 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b9qh\" (UniqueName: \"kubernetes.io/projected/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-kube-api-access-9b9qh\") pod \"redhat-operators-zc49l\" (UID: \"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65\") " pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:01 crc kubenswrapper[4746]: I1211 10:27:01.764656 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:02 crc kubenswrapper[4746]: I1211 10:27:02.242565 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zc49l"] Dec 11 10:27:02 crc kubenswrapper[4746]: I1211 10:27:02.879902 4746 generic.go:334] "Generic (PLEG): container finished" podID="78d8bc3a-8fbc-4f12-8675-c5f4ea437f65" containerID="5c6074fddedf039df738943a093a7861a4e1972817e8dfe5417e54e61998799b" exitCode=0 Dec 11 10:27:02 crc kubenswrapper[4746]: I1211 10:27:02.880027 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zc49l" event={"ID":"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65","Type":"ContainerDied","Data":"5c6074fddedf039df738943a093a7861a4e1972817e8dfe5417e54e61998799b"} Dec 11 10:27:02 crc kubenswrapper[4746]: I1211 10:27:02.880297 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zc49l" event={"ID":"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65","Type":"ContainerStarted","Data":"d16986e2db3ec121ed59883a536d969ead9c8dffbc2b3d852b54f14e64b8bd15"} Dec 11 10:27:03 crc kubenswrapper[4746]: I1211 10:27:03.895623 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zc49l" event={"ID":"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65","Type":"ContainerStarted","Data":"e655c7629ff8013b9328d62ab19af2cc713880d959cfaaf2fa879b10913fbbbb"} Dec 11 10:27:06 crc kubenswrapper[4746]: I1211 10:27:06.928538 4746 generic.go:334] "Generic (PLEG): container finished" podID="78d8bc3a-8fbc-4f12-8675-c5f4ea437f65" containerID="e655c7629ff8013b9328d62ab19af2cc713880d959cfaaf2fa879b10913fbbbb" exitCode=0 Dec 11 10:27:06 crc kubenswrapper[4746]: I1211 10:27:06.929272 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zc49l" 
event={"ID":"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65","Type":"ContainerDied","Data":"e655c7629ff8013b9328d62ab19af2cc713880d959cfaaf2fa879b10913fbbbb"} Dec 11 10:27:07 crc kubenswrapper[4746]: I1211 10:27:07.598942 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-7cc45cfb45-bbbq8" podUID="5c65e9de-7890-47aa-bcf7-48cdfd6dd262" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 11 10:27:08 crc kubenswrapper[4746]: I1211 10:27:08.630710 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:27:08 crc kubenswrapper[4746]: I1211 10:27:08.951907 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"9063986f278767b0acc214606a8b6d52c8f55ba41ce46f6d90a49b5d1b51ce33"} Dec 11 10:27:10 crc kubenswrapper[4746]: I1211 10:27:10.975937 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zc49l" event={"ID":"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65","Type":"ContainerStarted","Data":"f4f157169a3b339c5464307568384a2755730d7ae5541ce6e4c0cd0b155d4444"} Dec 11 10:27:11 crc kubenswrapper[4746]: I1211 10:27:11.002263 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zc49l" podStartSLOduration=2.470877306 podStartE2EDuration="10.00223619s" podCreationTimestamp="2025-12-11 10:27:01 +0000 UTC" firstStartedPulling="2025-12-11 10:27:02.884443265 +0000 UTC m=+1995.744306578" lastFinishedPulling="2025-12-11 10:27:10.415802149 +0000 UTC m=+2003.275665462" observedRunningTime="2025-12-11 10:27:10.998700774 +0000 UTC m=+2003.858564087" watchObservedRunningTime="2025-12-11 10:27:11.00223619 +0000 UTC m=+2003.862099503" Dec 11 10:27:11 crc kubenswrapper[4746]: I1211 10:27:11.765574 4746 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:11 crc kubenswrapper[4746]: I1211 10:27:11.765928 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:11 crc kubenswrapper[4746]: I1211 10:27:11.989529 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9fa6020-1f64-4cc4-8b95-7372a5ce6f92" containerID="c4c2911819f4ad945ef79c3891be842d24c0d0e9d69b2be973166c1431082504" exitCode=0 Dec 11 10:27:11 crc kubenswrapper[4746]: I1211 10:27:11.989641 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" event={"ID":"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92","Type":"ContainerDied","Data":"c4c2911819f4ad945ef79c3891be842d24c0d0e9d69b2be973166c1431082504"} Dec 11 10:27:12 crc kubenswrapper[4746]: I1211 10:27:12.821739 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zc49l" podUID="78d8bc3a-8fbc-4f12-8675-c5f4ea437f65" containerName="registry-server" probeResult="failure" output=< Dec 11 10:27:12 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s Dec 11 10:27:12 crc kubenswrapper[4746]: > Dec 11 10:27:13 crc kubenswrapper[4746]: I1211 10:27:13.502958 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" Dec 11 10:27:13 crc kubenswrapper[4746]: I1211 10:27:13.632228 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sm7kx\" (UniqueName: \"kubernetes.io/projected/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-kube-api-access-sm7kx\") pod \"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92\" (UID: \"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92\") " Dec 11 10:27:13 crc kubenswrapper[4746]: I1211 10:27:13.632341 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-inventory\") pod \"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92\" (UID: \"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92\") " Dec 11 10:27:13 crc kubenswrapper[4746]: I1211 10:27:13.632379 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-ssh-key\") pod \"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92\" (UID: \"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92\") " Dec 11 10:27:13 crc kubenswrapper[4746]: I1211 10:27:13.642314 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-kube-api-access-sm7kx" (OuterVolumeSpecName: "kube-api-access-sm7kx") pod "a9fa6020-1f64-4cc4-8b95-7372a5ce6f92" (UID: "a9fa6020-1f64-4cc4-8b95-7372a5ce6f92"). InnerVolumeSpecName "kube-api-access-sm7kx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:27:13 crc kubenswrapper[4746]: I1211 10:27:13.671344 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-inventory" (OuterVolumeSpecName: "inventory") pod "a9fa6020-1f64-4cc4-8b95-7372a5ce6f92" (UID: "a9fa6020-1f64-4cc4-8b95-7372a5ce6f92"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:27:13 crc kubenswrapper[4746]: I1211 10:27:13.671479 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a9fa6020-1f64-4cc4-8b95-7372a5ce6f92" (UID: "a9fa6020-1f64-4cc4-8b95-7372a5ce6f92"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:27:13 crc kubenswrapper[4746]: I1211 10:27:13.739420 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sm7kx\" (UniqueName: \"kubernetes.io/projected/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-kube-api-access-sm7kx\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:13 crc kubenswrapper[4746]: I1211 10:27:13.739462 4746 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:13 crc kubenswrapper[4746]: I1211 10:27:13.739476 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9fa6020-1f64-4cc4-8b95-7372a5ce6f92-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.013247 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" event={"ID":"a9fa6020-1f64-4cc4-8b95-7372a5ce6f92","Type":"ContainerDied","Data":"d16c758b317918af85df579aa521bb30222a8828d9d7044d3d65f1a5f31bf712"} Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.013302 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d16c758b317918af85df579aa521bb30222a8828d9d7044d3d65f1a5f31bf712" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.013376 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.130392 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc"] Dec 11 10:27:14 crc kubenswrapper[4746]: E1211 10:27:14.131148 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9fa6020-1f64-4cc4-8b95-7372a5ce6f92" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.131173 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9fa6020-1f64-4cc4-8b95-7372a5ce6f92" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.131491 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9fa6020-1f64-4cc4-8b95-7372a5ce6f92" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.132515 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.139108 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.139418 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.139592 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.139770 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.139917 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.140089 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.140247 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.140408 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.144772 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc"] Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.259730 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.260240 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.260483 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.260660 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.260826 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.260973 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.261183 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.261424 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.261643 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.261820 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.261964 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.262182 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.262482 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.262641 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mscn6\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-kube-api-access-mscn6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.365663 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.365821 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.365893 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-telemetry-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.365940 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.365964 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.365984 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.366103 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: 
\"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.366141 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mscn6\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-kube-api-access-mscn6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.366203 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.366225 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.366256 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 
10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.366291 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.366321 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.366358 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.374229 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.374468 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.374769 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.375231 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.375520 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.376005 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-telemetry-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.377952 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.378091 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.378574 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.380169 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: 
\"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.380727 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.381461 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.381528 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.386123 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mscn6\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-kube-api-access-mscn6\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:14 crc kubenswrapper[4746]: I1211 10:27:14.541446 
4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:15 crc kubenswrapper[4746]: I1211 10:27:15.085763 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc"] Dec 11 10:27:15 crc kubenswrapper[4746]: W1211 10:27:15.095565 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9076913_deed_4328_8c4b_147c3f7bac9a.slice/crio-aafecf59a4750531845ff18895721e088ff58c9b92bbae7e2a92b07d9ef22845 WatchSource:0}: Error finding container aafecf59a4750531845ff18895721e088ff58c9b92bbae7e2a92b07d9ef22845: Status 404 returned error can't find the container with id aafecf59a4750531845ff18895721e088ff58c9b92bbae7e2a92b07d9ef22845 Dec 11 10:27:16 crc kubenswrapper[4746]: I1211 10:27:16.035019 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" event={"ID":"a9076913-deed-4328-8c4b-147c3f7bac9a","Type":"ContainerStarted","Data":"f19ab3d4f7723867ba0668cb34f69ac91f4d6a1e3022c36055780707f8b3962c"} Dec 11 10:27:16 crc kubenswrapper[4746]: I1211 10:27:16.035570 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" event={"ID":"a9076913-deed-4328-8c4b-147c3f7bac9a","Type":"ContainerStarted","Data":"aafecf59a4750531845ff18895721e088ff58c9b92bbae7e2a92b07d9ef22845"} Dec 11 10:27:16 crc kubenswrapper[4746]: I1211 10:27:16.069367 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" podStartSLOduration=1.637334648 podStartE2EDuration="2.069333849s" podCreationTimestamp="2025-12-11 10:27:14 +0000 UTC" firstStartedPulling="2025-12-11 10:27:15.102093495 +0000 UTC m=+2007.961956808" 
lastFinishedPulling="2025-12-11 10:27:15.534092696 +0000 UTC m=+2008.393956009" observedRunningTime="2025-12-11 10:27:16.055702432 +0000 UTC m=+2008.915565765" watchObservedRunningTime="2025-12-11 10:27:16.069333849 +0000 UTC m=+2008.929197182" Dec 11 10:27:21 crc kubenswrapper[4746]: I1211 10:27:21.815365 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:21 crc kubenswrapper[4746]: I1211 10:27:21.890762 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:22 crc kubenswrapper[4746]: I1211 10:27:22.060926 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zc49l"] Dec 11 10:27:23 crc kubenswrapper[4746]: I1211 10:27:23.099204 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zc49l" podUID="78d8bc3a-8fbc-4f12-8675-c5f4ea437f65" containerName="registry-server" containerID="cri-o://f4f157169a3b339c5464307568384a2755730d7ae5541ce6e4c0cd0b155d4444" gracePeriod=2 Dec 11 10:27:23 crc kubenswrapper[4746]: I1211 10:27:23.566767 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:23 crc kubenswrapper[4746]: I1211 10:27:23.591569 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-utilities\") pod \"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65\" (UID: \"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65\") " Dec 11 10:27:23 crc kubenswrapper[4746]: I1211 10:27:23.591647 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-catalog-content\") pod \"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65\" (UID: \"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65\") " Dec 11 10:27:23 crc kubenswrapper[4746]: I1211 10:27:23.593292 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-utilities" (OuterVolumeSpecName: "utilities") pod "78d8bc3a-8fbc-4f12-8675-c5f4ea437f65" (UID: "78d8bc3a-8fbc-4f12-8675-c5f4ea437f65"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:27:23 crc kubenswrapper[4746]: I1211 10:27:23.693337 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b9qh\" (UniqueName: \"kubernetes.io/projected/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-kube-api-access-9b9qh\") pod \"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65\" (UID: \"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65\") " Dec 11 10:27:23 crc kubenswrapper[4746]: I1211 10:27:23.693871 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:23 crc kubenswrapper[4746]: I1211 10:27:23.699393 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-kube-api-access-9b9qh" (OuterVolumeSpecName: "kube-api-access-9b9qh") pod "78d8bc3a-8fbc-4f12-8675-c5f4ea437f65" (UID: "78d8bc3a-8fbc-4f12-8675-c5f4ea437f65"). InnerVolumeSpecName "kube-api-access-9b9qh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:27:23 crc kubenswrapper[4746]: I1211 10:27:23.713214 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "78d8bc3a-8fbc-4f12-8675-c5f4ea437f65" (UID: "78d8bc3a-8fbc-4f12-8675-c5f4ea437f65"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:27:23 crc kubenswrapper[4746]: I1211 10:27:23.795607 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b9qh\" (UniqueName: \"kubernetes.io/projected/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-kube-api-access-9b9qh\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:23 crc kubenswrapper[4746]: I1211 10:27:23.795659 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:24 crc kubenswrapper[4746]: I1211 10:27:24.109652 4746 generic.go:334] "Generic (PLEG): container finished" podID="78d8bc3a-8fbc-4f12-8675-c5f4ea437f65" containerID="f4f157169a3b339c5464307568384a2755730d7ae5541ce6e4c0cd0b155d4444" exitCode=0 Dec 11 10:27:24 crc kubenswrapper[4746]: I1211 10:27:24.109695 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zc49l" event={"ID":"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65","Type":"ContainerDied","Data":"f4f157169a3b339c5464307568384a2755730d7ae5541ce6e4c0cd0b155d4444"} Dec 11 10:27:24 crc kubenswrapper[4746]: I1211 10:27:24.109724 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zc49l" event={"ID":"78d8bc3a-8fbc-4f12-8675-c5f4ea437f65","Type":"ContainerDied","Data":"d16986e2db3ec121ed59883a536d969ead9c8dffbc2b3d852b54f14e64b8bd15"} Dec 11 10:27:24 crc kubenswrapper[4746]: I1211 10:27:24.109743 4746 scope.go:117] "RemoveContainer" containerID="f4f157169a3b339c5464307568384a2755730d7ae5541ce6e4c0cd0b155d4444" Dec 11 10:27:24 crc kubenswrapper[4746]: I1211 10:27:24.109867 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zc49l" Dec 11 10:27:24 crc kubenswrapper[4746]: I1211 10:27:24.147455 4746 scope.go:117] "RemoveContainer" containerID="e655c7629ff8013b9328d62ab19af2cc713880d959cfaaf2fa879b10913fbbbb" Dec 11 10:27:24 crc kubenswrapper[4746]: I1211 10:27:24.150545 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zc49l"] Dec 11 10:27:24 crc kubenswrapper[4746]: I1211 10:27:24.162969 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zc49l"] Dec 11 10:27:24 crc kubenswrapper[4746]: I1211 10:27:24.176431 4746 scope.go:117] "RemoveContainer" containerID="5c6074fddedf039df738943a093a7861a4e1972817e8dfe5417e54e61998799b" Dec 11 10:27:24 crc kubenswrapper[4746]: I1211 10:27:24.217053 4746 scope.go:117] "RemoveContainer" containerID="f4f157169a3b339c5464307568384a2755730d7ae5541ce6e4c0cd0b155d4444" Dec 11 10:27:24 crc kubenswrapper[4746]: E1211 10:27:24.217866 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4f157169a3b339c5464307568384a2755730d7ae5541ce6e4c0cd0b155d4444\": container with ID starting with f4f157169a3b339c5464307568384a2755730d7ae5541ce6e4c0cd0b155d4444 not found: ID does not exist" containerID="f4f157169a3b339c5464307568384a2755730d7ae5541ce6e4c0cd0b155d4444" Dec 11 10:27:24 crc kubenswrapper[4746]: I1211 10:27:24.217912 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4f157169a3b339c5464307568384a2755730d7ae5541ce6e4c0cd0b155d4444"} err="failed to get container status \"f4f157169a3b339c5464307568384a2755730d7ae5541ce6e4c0cd0b155d4444\": rpc error: code = NotFound desc = could not find container \"f4f157169a3b339c5464307568384a2755730d7ae5541ce6e4c0cd0b155d4444\": container with ID starting with f4f157169a3b339c5464307568384a2755730d7ae5541ce6e4c0cd0b155d4444 not found: ID does 
not exist" Dec 11 10:27:24 crc kubenswrapper[4746]: I1211 10:27:24.217941 4746 scope.go:117] "RemoveContainer" containerID="e655c7629ff8013b9328d62ab19af2cc713880d959cfaaf2fa879b10913fbbbb" Dec 11 10:27:24 crc kubenswrapper[4746]: E1211 10:27:24.218344 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e655c7629ff8013b9328d62ab19af2cc713880d959cfaaf2fa879b10913fbbbb\": container with ID starting with e655c7629ff8013b9328d62ab19af2cc713880d959cfaaf2fa879b10913fbbbb not found: ID does not exist" containerID="e655c7629ff8013b9328d62ab19af2cc713880d959cfaaf2fa879b10913fbbbb" Dec 11 10:27:24 crc kubenswrapper[4746]: I1211 10:27:24.218365 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e655c7629ff8013b9328d62ab19af2cc713880d959cfaaf2fa879b10913fbbbb"} err="failed to get container status \"e655c7629ff8013b9328d62ab19af2cc713880d959cfaaf2fa879b10913fbbbb\": rpc error: code = NotFound desc = could not find container \"e655c7629ff8013b9328d62ab19af2cc713880d959cfaaf2fa879b10913fbbbb\": container with ID starting with e655c7629ff8013b9328d62ab19af2cc713880d959cfaaf2fa879b10913fbbbb not found: ID does not exist" Dec 11 10:27:24 crc kubenswrapper[4746]: I1211 10:27:24.218377 4746 scope.go:117] "RemoveContainer" containerID="5c6074fddedf039df738943a093a7861a4e1972817e8dfe5417e54e61998799b" Dec 11 10:27:24 crc kubenswrapper[4746]: E1211 10:27:24.218754 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c6074fddedf039df738943a093a7861a4e1972817e8dfe5417e54e61998799b\": container with ID starting with 5c6074fddedf039df738943a093a7861a4e1972817e8dfe5417e54e61998799b not found: ID does not exist" containerID="5c6074fddedf039df738943a093a7861a4e1972817e8dfe5417e54e61998799b" Dec 11 10:27:24 crc kubenswrapper[4746]: I1211 10:27:24.218774 4746 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6074fddedf039df738943a093a7861a4e1972817e8dfe5417e54e61998799b"} err="failed to get container status \"5c6074fddedf039df738943a093a7861a4e1972817e8dfe5417e54e61998799b\": rpc error: code = NotFound desc = could not find container \"5c6074fddedf039df738943a093a7861a4e1972817e8dfe5417e54e61998799b\": container with ID starting with 5c6074fddedf039df738943a093a7861a4e1972817e8dfe5417e54e61998799b not found: ID does not exist" Dec 11 10:27:25 crc kubenswrapper[4746]: I1211 10:27:25.644309 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78d8bc3a-8fbc-4f12-8675-c5f4ea437f65" path="/var/lib/kubelet/pods/78d8bc3a-8fbc-4f12-8675-c5f4ea437f65/volumes" Dec 11 10:27:54 crc kubenswrapper[4746]: I1211 10:27:54.436023 4746 generic.go:334] "Generic (PLEG): container finished" podID="a9076913-deed-4328-8c4b-147c3f7bac9a" containerID="f19ab3d4f7723867ba0668cb34f69ac91f4d6a1e3022c36055780707f8b3962c" exitCode=0 Dec 11 10:27:54 crc kubenswrapper[4746]: I1211 10:27:54.436127 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" event={"ID":"a9076913-deed-4328-8c4b-147c3f7bac9a","Type":"ContainerDied","Data":"f19ab3d4f7723867ba0668cb34f69ac91f4d6a1e3022c36055780707f8b3962c"} Dec 11 10:27:55 crc kubenswrapper[4746]: I1211 10:27:55.955369 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.071249 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-ssh-key\") pod \"a9076913-deed-4328-8c4b-147c3f7bac9a\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.071311 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-neutron-metadata-combined-ca-bundle\") pod \"a9076913-deed-4328-8c4b-147c3f7bac9a\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.071358 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-nova-combined-ca-bundle\") pod \"a9076913-deed-4328-8c4b-147c3f7bac9a\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.071399 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-repo-setup-combined-ca-bundle\") pod \"a9076913-deed-4328-8c4b-147c3f7bac9a\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.071448 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-libvirt-combined-ca-bundle\") pod \"a9076913-deed-4328-8c4b-147c3f7bac9a\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 
10:27:56.071470 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"a9076913-deed-4328-8c4b-147c3f7bac9a\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.071570 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-bootstrap-combined-ca-bundle\") pod \"a9076913-deed-4328-8c4b-147c3f7bac9a\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.071594 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-telemetry-combined-ca-bundle\") pod \"a9076913-deed-4328-8c4b-147c3f7bac9a\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.071633 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mscn6\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-kube-api-access-mscn6\") pod \"a9076913-deed-4328-8c4b-147c3f7bac9a\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.072558 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-ovn-default-certs-0\") pod \"a9076913-deed-4328-8c4b-147c3f7bac9a\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.072605 4746 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-ovn-combined-ca-bundle\") pod \"a9076913-deed-4328-8c4b-147c3f7bac9a\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.072629 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"a9076913-deed-4328-8c4b-147c3f7bac9a\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.072702 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"a9076913-deed-4328-8c4b-147c3f7bac9a\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.072847 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-inventory\") pod \"a9076913-deed-4328-8c4b-147c3f7bac9a\" (UID: \"a9076913-deed-4328-8c4b-147c3f7bac9a\") " Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.080537 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a9076913-deed-4328-8c4b-147c3f7bac9a" (UID: "a9076913-deed-4328-8c4b-147c3f7bac9a"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.080595 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "a9076913-deed-4328-8c4b-147c3f7bac9a" (UID: "a9076913-deed-4328-8c4b-147c3f7bac9a"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.080779 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a9076913-deed-4328-8c4b-147c3f7bac9a" (UID: "a9076913-deed-4328-8c4b-147c3f7bac9a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.081357 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a9076913-deed-4328-8c4b-147c3f7bac9a" (UID: "a9076913-deed-4328-8c4b-147c3f7bac9a"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.085146 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-kube-api-access-mscn6" (OuterVolumeSpecName: "kube-api-access-mscn6") pod "a9076913-deed-4328-8c4b-147c3f7bac9a" (UID: "a9076913-deed-4328-8c4b-147c3f7bac9a"). InnerVolumeSpecName "kube-api-access-mscn6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.085285 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "a9076913-deed-4328-8c4b-147c3f7bac9a" (UID: "a9076913-deed-4328-8c4b-147c3f7bac9a"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.086514 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a9076913-deed-4328-8c4b-147c3f7bac9a" (UID: "a9076913-deed-4328-8c4b-147c3f7bac9a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.095654 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "a9076913-deed-4328-8c4b-147c3f7bac9a" (UID: "a9076913-deed-4328-8c4b-147c3f7bac9a"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.095654 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "a9076913-deed-4328-8c4b-147c3f7bac9a" (UID: "a9076913-deed-4328-8c4b-147c3f7bac9a"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.095784 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a9076913-deed-4328-8c4b-147c3f7bac9a" (UID: "a9076913-deed-4328-8c4b-147c3f7bac9a"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.096466 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a9076913-deed-4328-8c4b-147c3f7bac9a" (UID: "a9076913-deed-4328-8c4b-147c3f7bac9a"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.096751 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "a9076913-deed-4328-8c4b-147c3f7bac9a" (UID: "a9076913-deed-4328-8c4b-147c3f7bac9a"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.118091 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a9076913-deed-4328-8c4b-147c3f7bac9a" (UID: "a9076913-deed-4328-8c4b-147c3f7bac9a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.118777 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-inventory" (OuterVolumeSpecName: "inventory") pod "a9076913-deed-4328-8c4b-147c3f7bac9a" (UID: "a9076913-deed-4328-8c4b-147c3f7bac9a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.176228 4746 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.176297 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.176313 4746 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.176329 4746 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 
10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.176344 4746 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.176359 4746 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.176374 4746 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.176390 4746 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.176403 4746 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.176415 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mscn6\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-kube-api-access-mscn6\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.176428 4746 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.176445 4746 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9076913-deed-4328-8c4b-147c3f7bac9a-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.176457 4746 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.176471 4746 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a9076913-deed-4328-8c4b-147c3f7bac9a-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.463999 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" event={"ID":"a9076913-deed-4328-8c4b-147c3f7bac9a","Type":"ContainerDied","Data":"aafecf59a4750531845ff18895721e088ff58c9b92bbae7e2a92b07d9ef22845"} Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.464073 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aafecf59a4750531845ff18895721e088ff58c9b92bbae7e2a92b07d9ef22845" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.464165 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.589038 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9"] Dec 11 10:27:56 crc kubenswrapper[4746]: E1211 10:27:56.590670 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d8bc3a-8fbc-4f12-8675-c5f4ea437f65" containerName="registry-server" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.590709 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d8bc3a-8fbc-4f12-8675-c5f4ea437f65" containerName="registry-server" Dec 11 10:27:56 crc kubenswrapper[4746]: E1211 10:27:56.590753 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d8bc3a-8fbc-4f12-8675-c5f4ea437f65" containerName="extract-content" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.590764 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d8bc3a-8fbc-4f12-8675-c5f4ea437f65" containerName="extract-content" Dec 11 10:27:56 crc kubenswrapper[4746]: E1211 10:27:56.590778 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9076913-deed-4328-8c4b-147c3f7bac9a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.590792 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9076913-deed-4328-8c4b-147c3f7bac9a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 11 10:27:56 crc kubenswrapper[4746]: E1211 10:27:56.590813 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78d8bc3a-8fbc-4f12-8675-c5f4ea437f65" containerName="extract-utilities" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.590822 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="78d8bc3a-8fbc-4f12-8675-c5f4ea437f65" containerName="extract-utilities" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.591666 
4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="78d8bc3a-8fbc-4f12-8675-c5f4ea437f65" containerName="registry-server" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.591707 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9076913-deed-4328-8c4b-147c3f7bac9a" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.592881 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.596931 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.599079 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.600453 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.600648 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.601426 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.610516 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9"] Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.792001 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tn2d9\" (UID: 
\"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.792250 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tn2d9\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.792369 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tn2d9\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.792485 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zh7k\" (UniqueName: \"kubernetes.io/projected/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-kube-api-access-5zh7k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tn2d9\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.792515 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tn2d9\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.894836 4746 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-5zh7k\" (UniqueName: \"kubernetes.io/projected/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-kube-api-access-5zh7k\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tn2d9\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.894914 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tn2d9\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.895029 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tn2d9\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.895068 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tn2d9\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.895100 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tn2d9\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.896083 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tn2d9\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.902977 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tn2d9\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.903970 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tn2d9\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.903985 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-tn2d9\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.919862 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zh7k\" (UniqueName: \"kubernetes.io/projected/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-kube-api-access-5zh7k\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-tn2d9\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:56 crc kubenswrapper[4746]: I1211 10:27:56.920584 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:27:57 crc kubenswrapper[4746]: I1211 10:27:57.500660 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9"] Dec 11 10:27:58 crc kubenswrapper[4746]: I1211 10:27:58.488622 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" event={"ID":"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8","Type":"ContainerStarted","Data":"3084deada43c6e9eb1f05e8103d5f0c2c3e394bc506a688efe09edab12fbae7c"} Dec 11 10:27:59 crc kubenswrapper[4746]: I1211 10:27:59.501613 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" event={"ID":"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8","Type":"ContainerStarted","Data":"3ebd992fe286848f389b7e754740231f73389fc3244e0624ad32e288f9ad9262"} Dec 11 10:27:59 crc kubenswrapper[4746]: I1211 10:27:59.531709 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" podStartSLOduration=2.735855802 podStartE2EDuration="3.53166794s" podCreationTimestamp="2025-12-11 10:27:56 +0000 UTC" firstStartedPulling="2025-12-11 10:27:57.512376777 +0000 UTC m=+2050.372240090" lastFinishedPulling="2025-12-11 10:27:58.308188925 +0000 UTC m=+2051.168052228" observedRunningTime="2025-12-11 10:27:59.522186404 +0000 UTC m=+2052.382049727" watchObservedRunningTime="2025-12-11 10:27:59.53166794 +0000 UTC m=+2052.391531253" Dec 11 10:29:01 crc kubenswrapper[4746]: I1211 10:29:01.276398 4746 generic.go:334] "Generic (PLEG): container finished" 
podID="a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8" containerID="3ebd992fe286848f389b7e754740231f73389fc3244e0624ad32e288f9ad9262" exitCode=0 Dec 11 10:29:01 crc kubenswrapper[4746]: I1211 10:29:01.276499 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" event={"ID":"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8","Type":"ContainerDied","Data":"3ebd992fe286848f389b7e754740231f73389fc3244e0624ad32e288f9ad9262"} Dec 11 10:29:02 crc kubenswrapper[4746]: I1211 10:29:02.755503 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:29:02 crc kubenswrapper[4746]: I1211 10:29:02.872591 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-inventory\") pod \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " Dec 11 10:29:02 crc kubenswrapper[4746]: I1211 10:29:02.872719 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ovn-combined-ca-bundle\") pod \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " Dec 11 10:29:02 crc kubenswrapper[4746]: I1211 10:29:02.872742 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ssh-key\") pod \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " Dec 11 10:29:02 crc kubenswrapper[4746]: I1211 10:29:02.872800 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zh7k\" (UniqueName: \"kubernetes.io/projected/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-kube-api-access-5zh7k\") pod 
\"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " Dec 11 10:29:02 crc kubenswrapper[4746]: I1211 10:29:02.872885 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ovncontroller-config-0\") pod \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\" (UID: \"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8\") " Dec 11 10:29:02 crc kubenswrapper[4746]: I1211 10:29:02.880681 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8" (UID: "a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:29:02 crc kubenswrapper[4746]: I1211 10:29:02.883160 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-kube-api-access-5zh7k" (OuterVolumeSpecName: "kube-api-access-5zh7k") pod "a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8" (UID: "a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8"). InnerVolumeSpecName "kube-api-access-5zh7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:29:02 crc kubenswrapper[4746]: I1211 10:29:02.905300 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8" (UID: "a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:29:02 crc kubenswrapper[4746]: I1211 10:29:02.916003 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8" (UID: "a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:29:02 crc kubenswrapper[4746]: I1211 10:29:02.921809 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-inventory" (OuterVolumeSpecName: "inventory") pod "a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8" (UID: "a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:29:02 crc kubenswrapper[4746]: I1211 10:29:02.976554 4746 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:02 crc kubenswrapper[4746]: I1211 10:29:02.976590 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:02 crc kubenswrapper[4746]: I1211 10:29:02.976603 4746 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:02 crc kubenswrapper[4746]: I1211 10:29:02.976615 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zh7k\" (UniqueName: \"kubernetes.io/projected/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-kube-api-access-5zh7k\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:02 crc 
kubenswrapper[4746]: I1211 10:29:02.976624 4746 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.310342 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" event={"ID":"a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8","Type":"ContainerDied","Data":"3084deada43c6e9eb1f05e8103d5f0c2c3e394bc506a688efe09edab12fbae7c"} Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.310400 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3084deada43c6e9eb1f05e8103d5f0c2c3e394bc506a688efe09edab12fbae7c" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.310473 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-tn2d9" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.588076 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw"] Dec 11 10:29:03 crc kubenswrapper[4746]: E1211 10:29:03.589229 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.589332 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.589723 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.590840 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.594182 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.594402 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.594756 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.595025 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.595033 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.595178 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.600395 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw"] Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.695994 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.696795 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.697137 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.697300 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.697390 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.697411 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-nq6xs\" (UniqueName: \"kubernetes.io/projected/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-kube-api-access-nq6xs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.799858 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.799937 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq6xs\" (UniqueName: \"kubernetes.io/projected/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-kube-api-access-nq6xs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.800099 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.800122 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-inventory\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.800205 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.800260 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.812829 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.814283 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-neutron-metadata-combined-ca-bundle\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.824287 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.836992 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.837787 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.859442 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq6xs\" (UniqueName: \"kubernetes.io/projected/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-kube-api-access-nq6xs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 
10:29:03 crc kubenswrapper[4746]: I1211 10:29:03.942955 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:04 crc kubenswrapper[4746]: I1211 10:29:04.636710 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw"] Dec 11 10:29:05 crc kubenswrapper[4746]: I1211 10:29:05.332408 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" event={"ID":"e1498cd2-84fe-4769-8fc5-ffe9f8e32251","Type":"ContainerStarted","Data":"c431c8298b3af3a782fef7c9456b87b26f7ff2f24f91fcef8ee6971848f9ebf7"} Dec 11 10:29:06 crc kubenswrapper[4746]: I1211 10:29:06.346873 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" event={"ID":"e1498cd2-84fe-4769-8fc5-ffe9f8e32251","Type":"ContainerStarted","Data":"3cc06c1740145c81b01b1b5e729f945aa17609e617d3bd39e84acf29c197d6b0"} Dec 11 10:29:06 crc kubenswrapper[4746]: I1211 10:29:06.374992 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" podStartSLOduration=2.371738577 podStartE2EDuration="3.374962839s" podCreationTimestamp="2025-12-11 10:29:03 +0000 UTC" firstStartedPulling="2025-12-11 10:29:04.648021839 +0000 UTC m=+2117.507885142" lastFinishedPulling="2025-12-11 10:29:05.651246061 +0000 UTC m=+2118.511109404" observedRunningTime="2025-12-11 10:29:06.364871347 +0000 UTC m=+2119.224734740" watchObservedRunningTime="2025-12-11 10:29:06.374962839 +0000 UTC m=+2119.234826152" Dec 11 10:29:29 crc kubenswrapper[4746]: I1211 10:29:29.877896 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:29:29 crc kubenswrapper[4746]: I1211 10:29:29.878699 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:29:55 crc kubenswrapper[4746]: I1211 10:29:55.935472 4746 generic.go:334] "Generic (PLEG): container finished" podID="e1498cd2-84fe-4769-8fc5-ffe9f8e32251" containerID="3cc06c1740145c81b01b1b5e729f945aa17609e617d3bd39e84acf29c197d6b0" exitCode=0 Dec 11 10:29:55 crc kubenswrapper[4746]: I1211 10:29:55.935578 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" event={"ID":"e1498cd2-84fe-4769-8fc5-ffe9f8e32251","Type":"ContainerDied","Data":"3cc06c1740145c81b01b1b5e729f945aa17609e617d3bd39e84acf29c197d6b0"} Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.366589 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.453031 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-nova-metadata-neutron-config-0\") pod \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.453177 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-neutron-ovn-metadata-agent-neutron-config-0\") pod \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.453257 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-ssh-key\") pod \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.453388 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-neutron-metadata-combined-ca-bundle\") pod \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.453601 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-inventory\") pod \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " Dec 11 10:29:57 crc 
kubenswrapper[4746]: I1211 10:29:57.453646 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq6xs\" (UniqueName: \"kubernetes.io/projected/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-kube-api-access-nq6xs\") pod \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\" (UID: \"e1498cd2-84fe-4769-8fc5-ffe9f8e32251\") " Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.459327 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e1498cd2-84fe-4769-8fc5-ffe9f8e32251" (UID: "e1498cd2-84fe-4769-8fc5-ffe9f8e32251"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.470279 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-kube-api-access-nq6xs" (OuterVolumeSpecName: "kube-api-access-nq6xs") pod "e1498cd2-84fe-4769-8fc5-ffe9f8e32251" (UID: "e1498cd2-84fe-4769-8fc5-ffe9f8e32251"). InnerVolumeSpecName "kube-api-access-nq6xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.496488 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e1498cd2-84fe-4769-8fc5-ffe9f8e32251" (UID: "e1498cd2-84fe-4769-8fc5-ffe9f8e32251"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.496429 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "e1498cd2-84fe-4769-8fc5-ffe9f8e32251" (UID: "e1498cd2-84fe-4769-8fc5-ffe9f8e32251"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.496500 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "e1498cd2-84fe-4769-8fc5-ffe9f8e32251" (UID: "e1498cd2-84fe-4769-8fc5-ffe9f8e32251"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.505854 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-inventory" (OuterVolumeSpecName: "inventory") pod "e1498cd2-84fe-4769-8fc5-ffe9f8e32251" (UID: "e1498cd2-84fe-4769-8fc5-ffe9f8e32251"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.556217 4746 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.556262 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq6xs\" (UniqueName: \"kubernetes.io/projected/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-kube-api-access-nq6xs\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.556277 4746 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.556294 4746 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.556307 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.556324 4746 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1498cd2-84fe-4769-8fc5-ffe9f8e32251-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.961135 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" 
event={"ID":"e1498cd2-84fe-4769-8fc5-ffe9f8e32251","Type":"ContainerDied","Data":"c431c8298b3af3a782fef7c9456b87b26f7ff2f24f91fcef8ee6971848f9ebf7"} Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.961191 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c431c8298b3af3a782fef7c9456b87b26f7ff2f24f91fcef8ee6971848f9ebf7" Dec 11 10:29:57 crc kubenswrapper[4746]: I1211 10:29:57.961190 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.079612 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t"] Dec 11 10:29:58 crc kubenswrapper[4746]: E1211 10:29:58.081803 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1498cd2-84fe-4769-8fc5-ffe9f8e32251" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.081973 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1498cd2-84fe-4769-8fc5-ffe9f8e32251" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.082354 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1498cd2-84fe-4769-8fc5-ffe9f8e32251" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.083552 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.086442 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.086556 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.086694 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.086742 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.086965 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.098773 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t"] Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.171215 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt27q\" (UniqueName: \"kubernetes.io/projected/1424dbeb-f9d9-48d1-8b92-14828c8ea326-kube-api-access-wt27q\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-85b2t\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.171291 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-85b2t\" (UID: 
\"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.171467 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-85b2t\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.172132 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-85b2t\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.172267 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-85b2t\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.274511 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-85b2t\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.274625 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-85b2t\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.274701 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt27q\" (UniqueName: \"kubernetes.io/projected/1424dbeb-f9d9-48d1-8b92-14828c8ea326-kube-api-access-wt27q\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-85b2t\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.274762 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-85b2t\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.274837 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-85b2t\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.280498 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-85b2t\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.281656 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-85b2t\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.282728 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-85b2t\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.284906 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-85b2t\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.297520 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt27q\" (UniqueName: \"kubernetes.io/projected/1424dbeb-f9d9-48d1-8b92-14828c8ea326-kube-api-access-wt27q\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-85b2t\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.403685 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.945376 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t"] Dec 11 10:29:58 crc kubenswrapper[4746]: I1211 10:29:58.994097 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" event={"ID":"1424dbeb-f9d9-48d1-8b92-14828c8ea326","Type":"ContainerStarted","Data":"912e42e96b8e18e400e99a809bfb47e3744dbc46e7b33a678415ea64f24793d6"} Dec 11 10:29:59 crc kubenswrapper[4746]: I1211 10:29:59.877702 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:29:59 crc kubenswrapper[4746]: I1211 10:29:59.878325 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:30:00 crc kubenswrapper[4746]: I1211 10:30:00.004526 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" event={"ID":"1424dbeb-f9d9-48d1-8b92-14828c8ea326","Type":"ContainerStarted","Data":"e8bd4e9b898aba2340c9d848c147a9846133e60703b0aef64026bc798cac48a7"} Dec 11 10:30:00 crc kubenswrapper[4746]: I1211 10:30:00.140512 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" podStartSLOduration=1.617918116 podStartE2EDuration="2.140477686s" podCreationTimestamp="2025-12-11 
10:29:58 +0000 UTC" firstStartedPulling="2025-12-11 10:29:58.960144351 +0000 UTC m=+2171.820007674" lastFinishedPulling="2025-12-11 10:29:59.482703931 +0000 UTC m=+2172.342567244" observedRunningTime="2025-12-11 10:30:00.029936677 +0000 UTC m=+2172.889800050" watchObservedRunningTime="2025-12-11 10:30:00.140477686 +0000 UTC m=+2173.000340999" Dec 11 10:30:00 crc kubenswrapper[4746]: I1211 10:30:00.142929 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7"] Dec 11 10:30:00 crc kubenswrapper[4746]: I1211 10:30:00.153522 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7" Dec 11 10:30:00 crc kubenswrapper[4746]: I1211 10:30:00.157592 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 10:30:00 crc kubenswrapper[4746]: I1211 10:30:00.157613 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 10:30:00 crc kubenswrapper[4746]: I1211 10:30:00.158611 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7"] Dec 11 10:30:00 crc kubenswrapper[4746]: I1211 10:30:00.331742 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz2xt\" (UniqueName: \"kubernetes.io/projected/49debe6e-f178-4960-acdd-eee3d0d075bc-kube-api-access-fz2xt\") pod \"collect-profiles-29424150-bpsn7\" (UID: \"49debe6e-f178-4960-acdd-eee3d0d075bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7" Dec 11 10:30:00 crc kubenswrapper[4746]: I1211 10:30:00.331939 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/49debe6e-f178-4960-acdd-eee3d0d075bc-config-volume\") pod \"collect-profiles-29424150-bpsn7\" (UID: \"49debe6e-f178-4960-acdd-eee3d0d075bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7" Dec 11 10:30:00 crc kubenswrapper[4746]: I1211 10:30:00.332373 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49debe6e-f178-4960-acdd-eee3d0d075bc-secret-volume\") pod \"collect-profiles-29424150-bpsn7\" (UID: \"49debe6e-f178-4960-acdd-eee3d0d075bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7" Dec 11 10:30:00 crc kubenswrapper[4746]: I1211 10:30:00.436189 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49debe6e-f178-4960-acdd-eee3d0d075bc-config-volume\") pod \"collect-profiles-29424150-bpsn7\" (UID: \"49debe6e-f178-4960-acdd-eee3d0d075bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7" Dec 11 10:30:00 crc kubenswrapper[4746]: I1211 10:30:00.436443 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49debe6e-f178-4960-acdd-eee3d0d075bc-secret-volume\") pod \"collect-profiles-29424150-bpsn7\" (UID: \"49debe6e-f178-4960-acdd-eee3d0d075bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7" Dec 11 10:30:00 crc kubenswrapper[4746]: I1211 10:30:00.436582 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz2xt\" (UniqueName: \"kubernetes.io/projected/49debe6e-f178-4960-acdd-eee3d0d075bc-kube-api-access-fz2xt\") pod \"collect-profiles-29424150-bpsn7\" (UID: \"49debe6e-f178-4960-acdd-eee3d0d075bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7" Dec 11 10:30:00 crc kubenswrapper[4746]: 
I1211 10:30:00.437368 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49debe6e-f178-4960-acdd-eee3d0d075bc-config-volume\") pod \"collect-profiles-29424150-bpsn7\" (UID: \"49debe6e-f178-4960-acdd-eee3d0d075bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7" Dec 11 10:30:00 crc kubenswrapper[4746]: I1211 10:30:00.448760 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49debe6e-f178-4960-acdd-eee3d0d075bc-secret-volume\") pod \"collect-profiles-29424150-bpsn7\" (UID: \"49debe6e-f178-4960-acdd-eee3d0d075bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7" Dec 11 10:30:00 crc kubenswrapper[4746]: I1211 10:30:00.458936 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz2xt\" (UniqueName: \"kubernetes.io/projected/49debe6e-f178-4960-acdd-eee3d0d075bc-kube-api-access-fz2xt\") pod \"collect-profiles-29424150-bpsn7\" (UID: \"49debe6e-f178-4960-acdd-eee3d0d075bc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7" Dec 11 10:30:00 crc kubenswrapper[4746]: I1211 10:30:00.557149 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7" Dec 11 10:30:01 crc kubenswrapper[4746]: I1211 10:30:01.127826 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7"] Dec 11 10:30:02 crc kubenswrapper[4746]: I1211 10:30:02.050829 4746 generic.go:334] "Generic (PLEG): container finished" podID="49debe6e-f178-4960-acdd-eee3d0d075bc" containerID="bf358addc46aa2b75eae0bf58101b4446cc92c7c9e9d4892a25a4b959d90a463" exitCode=0 Dec 11 10:30:02 crc kubenswrapper[4746]: I1211 10:30:02.051240 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7" event={"ID":"49debe6e-f178-4960-acdd-eee3d0d075bc","Type":"ContainerDied","Data":"bf358addc46aa2b75eae0bf58101b4446cc92c7c9e9d4892a25a4b959d90a463"} Dec 11 10:30:02 crc kubenswrapper[4746]: I1211 10:30:02.051276 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7" event={"ID":"49debe6e-f178-4960-acdd-eee3d0d075bc","Type":"ContainerStarted","Data":"14097249afb263bcfd6f55c39bd1404313c5cff945b1c00e64c8773ad8840dc0"} Dec 11 10:30:03 crc kubenswrapper[4746]: I1211 10:30:03.427889 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7" Dec 11 10:30:03 crc kubenswrapper[4746]: I1211 10:30:03.610151 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49debe6e-f178-4960-acdd-eee3d0d075bc-secret-volume\") pod \"49debe6e-f178-4960-acdd-eee3d0d075bc\" (UID: \"49debe6e-f178-4960-acdd-eee3d0d075bc\") " Dec 11 10:30:03 crc kubenswrapper[4746]: I1211 10:30:03.610368 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49debe6e-f178-4960-acdd-eee3d0d075bc-config-volume\") pod \"49debe6e-f178-4960-acdd-eee3d0d075bc\" (UID: \"49debe6e-f178-4960-acdd-eee3d0d075bc\") " Dec 11 10:30:03 crc kubenswrapper[4746]: I1211 10:30:03.610449 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz2xt\" (UniqueName: \"kubernetes.io/projected/49debe6e-f178-4960-acdd-eee3d0d075bc-kube-api-access-fz2xt\") pod \"49debe6e-f178-4960-acdd-eee3d0d075bc\" (UID: \"49debe6e-f178-4960-acdd-eee3d0d075bc\") " Dec 11 10:30:03 crc kubenswrapper[4746]: I1211 10:30:03.611178 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49debe6e-f178-4960-acdd-eee3d0d075bc-config-volume" (OuterVolumeSpecName: "config-volume") pod "49debe6e-f178-4960-acdd-eee3d0d075bc" (UID: "49debe6e-f178-4960-acdd-eee3d0d075bc"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:30:03 crc kubenswrapper[4746]: I1211 10:30:03.611521 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/49debe6e-f178-4960-acdd-eee3d0d075bc-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 10:30:03 crc kubenswrapper[4746]: I1211 10:30:03.618593 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49debe6e-f178-4960-acdd-eee3d0d075bc-kube-api-access-fz2xt" (OuterVolumeSpecName: "kube-api-access-fz2xt") pod "49debe6e-f178-4960-acdd-eee3d0d075bc" (UID: "49debe6e-f178-4960-acdd-eee3d0d075bc"). InnerVolumeSpecName "kube-api-access-fz2xt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:30:03 crc kubenswrapper[4746]: I1211 10:30:03.619657 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49debe6e-f178-4960-acdd-eee3d0d075bc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "49debe6e-f178-4960-acdd-eee3d0d075bc" (UID: "49debe6e-f178-4960-acdd-eee3d0d075bc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:30:03 crc kubenswrapper[4746]: I1211 10:30:03.713954 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz2xt\" (UniqueName: \"kubernetes.io/projected/49debe6e-f178-4960-acdd-eee3d0d075bc-kube-api-access-fz2xt\") on node \"crc\" DevicePath \"\"" Dec 11 10:30:03 crc kubenswrapper[4746]: I1211 10:30:03.714005 4746 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/49debe6e-f178-4960-acdd-eee3d0d075bc-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 10:30:04 crc kubenswrapper[4746]: I1211 10:30:04.085374 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7" event={"ID":"49debe6e-f178-4960-acdd-eee3d0d075bc","Type":"ContainerDied","Data":"14097249afb263bcfd6f55c39bd1404313c5cff945b1c00e64c8773ad8840dc0"} Dec 11 10:30:04 crc kubenswrapper[4746]: I1211 10:30:04.085430 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14097249afb263bcfd6f55c39bd1404313c5cff945b1c00e64c8773ad8840dc0" Dec 11 10:30:04 crc kubenswrapper[4746]: I1211 10:30:04.085456 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424150-bpsn7" Dec 11 10:30:04 crc kubenswrapper[4746]: I1211 10:30:04.520857 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h"] Dec 11 10:30:04 crc kubenswrapper[4746]: I1211 10:30:04.531210 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424105-lrv4h"] Dec 11 10:30:05 crc kubenswrapper[4746]: I1211 10:30:05.646327 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9c295d4-1896-4e5e-989e-2d0a3eb9b07e" path="/var/lib/kubelet/pods/d9c295d4-1896-4e5e-989e-2d0a3eb9b07e/volumes" Dec 11 10:30:29 crc kubenswrapper[4746]: I1211 10:30:29.877093 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:30:29 crc kubenswrapper[4746]: I1211 10:30:29.877834 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:30:29 crc kubenswrapper[4746]: I1211 10:30:29.877883 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 10:30:29 crc kubenswrapper[4746]: I1211 10:30:29.878947 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9063986f278767b0acc214606a8b6d52c8f55ba41ce46f6d90a49b5d1b51ce33"} 
pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:30:29 crc kubenswrapper[4746]: I1211 10:30:29.879019 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" containerID="cri-o://9063986f278767b0acc214606a8b6d52c8f55ba41ce46f6d90a49b5d1b51ce33" gracePeriod=600 Dec 11 10:30:30 crc kubenswrapper[4746]: I1211 10:30:30.380063 4746 generic.go:334] "Generic (PLEG): container finished" podID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerID="9063986f278767b0acc214606a8b6d52c8f55ba41ce46f6d90a49b5d1b51ce33" exitCode=0 Dec 11 10:30:30 crc kubenswrapper[4746]: I1211 10:30:30.380519 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerDied","Data":"9063986f278767b0acc214606a8b6d52c8f55ba41ce46f6d90a49b5d1b51ce33"} Dec 11 10:30:30 crc kubenswrapper[4746]: I1211 10:30:30.380578 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806"} Dec 11 10:30:30 crc kubenswrapper[4746]: I1211 10:30:30.380597 4746 scope.go:117] "RemoveContainer" containerID="7e053a89dee5f4aee7af810d7c936d4e26bb83e32637133856706f572240f0bb" Dec 11 10:31:04 crc kubenswrapper[4746]: I1211 10:31:04.378164 4746 scope.go:117] "RemoveContainer" containerID="be661d318b8360bf8675bf87e0885727febb81c4d6a663a4d5a6e0a298e257bc" Dec 11 10:31:09 crc kubenswrapper[4746]: I1211 10:31:09.538179 4746 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/certified-operators-j8dqp"] Dec 11 10:31:09 crc kubenswrapper[4746]: E1211 10:31:09.541202 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49debe6e-f178-4960-acdd-eee3d0d075bc" containerName="collect-profiles" Dec 11 10:31:09 crc kubenswrapper[4746]: I1211 10:31:09.541318 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="49debe6e-f178-4960-acdd-eee3d0d075bc" containerName="collect-profiles" Dec 11 10:31:09 crc kubenswrapper[4746]: I1211 10:31:09.541699 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="49debe6e-f178-4960-acdd-eee3d0d075bc" containerName="collect-profiles" Dec 11 10:31:09 crc kubenswrapper[4746]: I1211 10:31:09.543770 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:09 crc kubenswrapper[4746]: I1211 10:31:09.551617 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8dqp"] Dec 11 10:31:09 crc kubenswrapper[4746]: I1211 10:31:09.701811 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-utilities\") pod \"certified-operators-j8dqp\" (UID: \"05b30f79-c4a3-4e93-bcd9-0ae1e6297493\") " pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:09 crc kubenswrapper[4746]: I1211 10:31:09.701900 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qz8h\" (UniqueName: \"kubernetes.io/projected/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-kube-api-access-6qz8h\") pod \"certified-operators-j8dqp\" (UID: \"05b30f79-c4a3-4e93-bcd9-0ae1e6297493\") " pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:09 crc kubenswrapper[4746]: I1211 10:31:09.702131 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-catalog-content\") pod \"certified-operators-j8dqp\" (UID: \"05b30f79-c4a3-4e93-bcd9-0ae1e6297493\") " pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:09 crc kubenswrapper[4746]: I1211 10:31:09.805780 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-catalog-content\") pod \"certified-operators-j8dqp\" (UID: \"05b30f79-c4a3-4e93-bcd9-0ae1e6297493\") " pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:09 crc kubenswrapper[4746]: I1211 10:31:09.805890 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-utilities\") pod \"certified-operators-j8dqp\" (UID: \"05b30f79-c4a3-4e93-bcd9-0ae1e6297493\") " pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:09 crc kubenswrapper[4746]: I1211 10:31:09.805972 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qz8h\" (UniqueName: \"kubernetes.io/projected/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-kube-api-access-6qz8h\") pod \"certified-operators-j8dqp\" (UID: \"05b30f79-c4a3-4e93-bcd9-0ae1e6297493\") " pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:09 crc kubenswrapper[4746]: I1211 10:31:09.806669 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-catalog-content\") pod \"certified-operators-j8dqp\" (UID: \"05b30f79-c4a3-4e93-bcd9-0ae1e6297493\") " pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:09 crc kubenswrapper[4746]: I1211 10:31:09.806695 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-utilities\") pod \"certified-operators-j8dqp\" (UID: \"05b30f79-c4a3-4e93-bcd9-0ae1e6297493\") " pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:09 crc kubenswrapper[4746]: I1211 10:31:09.833249 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qz8h\" (UniqueName: \"kubernetes.io/projected/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-kube-api-access-6qz8h\") pod \"certified-operators-j8dqp\" (UID: \"05b30f79-c4a3-4e93-bcd9-0ae1e6297493\") " pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:09 crc kubenswrapper[4746]: I1211 10:31:09.883699 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:10 crc kubenswrapper[4746]: I1211 10:31:10.536231 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j8dqp"] Dec 11 10:31:10 crc kubenswrapper[4746]: W1211 10:31:10.552989 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05b30f79_c4a3_4e93_bcd9_0ae1e6297493.slice/crio-3d54cc93bb0a65bad311b1180e5fe2a20fa049b9a1ea01b4e43d27ec89f2c257 WatchSource:0}: Error finding container 3d54cc93bb0a65bad311b1180e5fe2a20fa049b9a1ea01b4e43d27ec89f2c257: Status 404 returned error can't find the container with id 3d54cc93bb0a65bad311b1180e5fe2a20fa049b9a1ea01b4e43d27ec89f2c257 Dec 11 10:31:10 crc kubenswrapper[4746]: I1211 10:31:10.850968 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8dqp" event={"ID":"05b30f79-c4a3-4e93-bcd9-0ae1e6297493","Type":"ContainerStarted","Data":"3d54cc93bb0a65bad311b1180e5fe2a20fa049b9a1ea01b4e43d27ec89f2c257"} Dec 11 10:31:11 crc kubenswrapper[4746]: I1211 10:31:11.863164 4746 generic.go:334] 
"Generic (PLEG): container finished" podID="05b30f79-c4a3-4e93-bcd9-0ae1e6297493" containerID="7218e2a7d0524ab99fc280d9f627a19a9fb8ca3ff893cb64a655991b3bb7ecab" exitCode=0 Dec 11 10:31:11 crc kubenswrapper[4746]: I1211 10:31:11.863279 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8dqp" event={"ID":"05b30f79-c4a3-4e93-bcd9-0ae1e6297493","Type":"ContainerDied","Data":"7218e2a7d0524ab99fc280d9f627a19a9fb8ca3ff893cb64a655991b3bb7ecab"} Dec 11 10:31:11 crc kubenswrapper[4746]: I1211 10:31:11.866268 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 10:31:12 crc kubenswrapper[4746]: I1211 10:31:12.340628 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-s69t4"] Dec 11 10:31:12 crc kubenswrapper[4746]: I1211 10:31:12.344475 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:12 crc kubenswrapper[4746]: I1211 10:31:12.350286 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s69t4"] Dec 11 10:31:12 crc kubenswrapper[4746]: I1211 10:31:12.470823 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9svgv\" (UniqueName: \"kubernetes.io/projected/0ebff108-f640-4d73-a081-a3a3a73e8c20-kube-api-access-9svgv\") pod \"community-operators-s69t4\" (UID: \"0ebff108-f640-4d73-a081-a3a3a73e8c20\") " pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:12 crc kubenswrapper[4746]: I1211 10:31:12.470964 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ebff108-f640-4d73-a081-a3a3a73e8c20-catalog-content\") pod \"community-operators-s69t4\" (UID: \"0ebff108-f640-4d73-a081-a3a3a73e8c20\") " 
pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:12 crc kubenswrapper[4746]: I1211 10:31:12.471137 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ebff108-f640-4d73-a081-a3a3a73e8c20-utilities\") pod \"community-operators-s69t4\" (UID: \"0ebff108-f640-4d73-a081-a3a3a73e8c20\") " pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:12 crc kubenswrapper[4746]: I1211 10:31:12.573723 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ebff108-f640-4d73-a081-a3a3a73e8c20-catalog-content\") pod \"community-operators-s69t4\" (UID: \"0ebff108-f640-4d73-a081-a3a3a73e8c20\") " pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:12 crc kubenswrapper[4746]: I1211 10:31:12.573804 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ebff108-f640-4d73-a081-a3a3a73e8c20-utilities\") pod \"community-operators-s69t4\" (UID: \"0ebff108-f640-4d73-a081-a3a3a73e8c20\") " pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:12 crc kubenswrapper[4746]: I1211 10:31:12.573939 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9svgv\" (UniqueName: \"kubernetes.io/projected/0ebff108-f640-4d73-a081-a3a3a73e8c20-kube-api-access-9svgv\") pod \"community-operators-s69t4\" (UID: \"0ebff108-f640-4d73-a081-a3a3a73e8c20\") " pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:12 crc kubenswrapper[4746]: I1211 10:31:12.574893 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ebff108-f640-4d73-a081-a3a3a73e8c20-catalog-content\") pod \"community-operators-s69t4\" (UID: \"0ebff108-f640-4d73-a081-a3a3a73e8c20\") " 
pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:12 crc kubenswrapper[4746]: I1211 10:31:12.575350 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ebff108-f640-4d73-a081-a3a3a73e8c20-utilities\") pod \"community-operators-s69t4\" (UID: \"0ebff108-f640-4d73-a081-a3a3a73e8c20\") " pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:12 crc kubenswrapper[4746]: I1211 10:31:12.598455 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9svgv\" (UniqueName: \"kubernetes.io/projected/0ebff108-f640-4d73-a081-a3a3a73e8c20-kube-api-access-9svgv\") pod \"community-operators-s69t4\" (UID: \"0ebff108-f640-4d73-a081-a3a3a73e8c20\") " pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:12 crc kubenswrapper[4746]: I1211 10:31:12.679298 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:13 crc kubenswrapper[4746]: I1211 10:31:13.204621 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-s69t4"] Dec 11 10:31:13 crc kubenswrapper[4746]: I1211 10:31:13.892367 4746 generic.go:334] "Generic (PLEG): container finished" podID="0ebff108-f640-4d73-a081-a3a3a73e8c20" containerID="fff4bdf44d84b6db991a260e9f8bb1a2d26e6eca5134a11c46866fade74abb5b" exitCode=0 Dec 11 10:31:13 crc kubenswrapper[4746]: I1211 10:31:13.892484 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s69t4" event={"ID":"0ebff108-f640-4d73-a081-a3a3a73e8c20","Type":"ContainerDied","Data":"fff4bdf44d84b6db991a260e9f8bb1a2d26e6eca5134a11c46866fade74abb5b"} Dec 11 10:31:13 crc kubenswrapper[4746]: I1211 10:31:13.893120 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s69t4" 
event={"ID":"0ebff108-f640-4d73-a081-a3a3a73e8c20","Type":"ContainerStarted","Data":"37d1fa9156d246c70ead77e71a596ede8c7e95c477900e1abb2f265cefb9f15b"} Dec 11 10:31:18 crc kubenswrapper[4746]: I1211 10:31:18.955631 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s69t4" event={"ID":"0ebff108-f640-4d73-a081-a3a3a73e8c20","Type":"ContainerStarted","Data":"e3b1e5cdf05599d0e1057a6164453d81f326c8af3c9e3a0b1dda36f00de637b0"} Dec 11 10:31:18 crc kubenswrapper[4746]: I1211 10:31:18.959001 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8dqp" event={"ID":"05b30f79-c4a3-4e93-bcd9-0ae1e6297493","Type":"ContainerStarted","Data":"878959239fab4beb13833357c20d9d798f708edfe18138be0ed311f80d07ca12"} Dec 11 10:31:20 crc kubenswrapper[4746]: I1211 10:31:20.986920 4746 generic.go:334] "Generic (PLEG): container finished" podID="05b30f79-c4a3-4e93-bcd9-0ae1e6297493" containerID="878959239fab4beb13833357c20d9d798f708edfe18138be0ed311f80d07ca12" exitCode=0 Dec 11 10:31:20 crc kubenswrapper[4746]: I1211 10:31:20.987017 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8dqp" event={"ID":"05b30f79-c4a3-4e93-bcd9-0ae1e6297493","Type":"ContainerDied","Data":"878959239fab4beb13833357c20d9d798f708edfe18138be0ed311f80d07ca12"} Dec 11 10:31:22 crc kubenswrapper[4746]: I1211 10:31:22.001533 4746 generic.go:334] "Generic (PLEG): container finished" podID="0ebff108-f640-4d73-a081-a3a3a73e8c20" containerID="e3b1e5cdf05599d0e1057a6164453d81f326c8af3c9e3a0b1dda36f00de637b0" exitCode=0 Dec 11 10:31:22 crc kubenswrapper[4746]: I1211 10:31:22.001588 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s69t4" event={"ID":"0ebff108-f640-4d73-a081-a3a3a73e8c20","Type":"ContainerDied","Data":"e3b1e5cdf05599d0e1057a6164453d81f326c8af3c9e3a0b1dda36f00de637b0"} Dec 11 10:31:22 crc kubenswrapper[4746]: 
I1211 10:31:22.006096 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8dqp" event={"ID":"05b30f79-c4a3-4e93-bcd9-0ae1e6297493","Type":"ContainerStarted","Data":"db12e846b5f72cbb60ef79f90c5b8a737640de1bc7d06a80cb03555ccebfc0c3"} Dec 11 10:31:22 crc kubenswrapper[4746]: I1211 10:31:22.052643 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j8dqp" podStartSLOduration=3.365247298 podStartE2EDuration="13.052612504s" podCreationTimestamp="2025-12-11 10:31:09 +0000 UTC" firstStartedPulling="2025-12-11 10:31:11.865979474 +0000 UTC m=+2244.725842787" lastFinishedPulling="2025-12-11 10:31:21.55334468 +0000 UTC m=+2254.413207993" observedRunningTime="2025-12-11 10:31:22.050235529 +0000 UTC m=+2254.910098852" watchObservedRunningTime="2025-12-11 10:31:22.052612504 +0000 UTC m=+2254.912475817" Dec 11 10:31:23 crc kubenswrapper[4746]: I1211 10:31:23.023003 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s69t4" event={"ID":"0ebff108-f640-4d73-a081-a3a3a73e8c20","Type":"ContainerStarted","Data":"5c89159c83a8fa0dd7f4da621572161f83b08d26a4c63e018978a2bbfbed65ff"} Dec 11 10:31:23 crc kubenswrapper[4746]: I1211 10:31:23.060542 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-s69t4" podStartSLOduration=2.340084393 podStartE2EDuration="11.060511963s" podCreationTimestamp="2025-12-11 10:31:12 +0000 UTC" firstStartedPulling="2025-12-11 10:31:13.895456341 +0000 UTC m=+2246.755319664" lastFinishedPulling="2025-12-11 10:31:22.615883921 +0000 UTC m=+2255.475747234" observedRunningTime="2025-12-11 10:31:23.052166328 +0000 UTC m=+2255.912029641" watchObservedRunningTime="2025-12-11 10:31:23.060511963 +0000 UTC m=+2255.920375276" Dec 11 10:31:29 crc kubenswrapper[4746]: I1211 10:31:29.885250 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:29 crc kubenswrapper[4746]: I1211 10:31:29.886115 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:29 crc kubenswrapper[4746]: I1211 10:31:29.931020 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:30 crc kubenswrapper[4746]: I1211 10:31:30.142603 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:30 crc kubenswrapper[4746]: I1211 10:31:30.201516 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j8dqp"] Dec 11 10:31:32 crc kubenswrapper[4746]: I1211 10:31:32.114091 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j8dqp" podUID="05b30f79-c4a3-4e93-bcd9-0ae1e6297493" containerName="registry-server" containerID="cri-o://db12e846b5f72cbb60ef79f90c5b8a737640de1bc7d06a80cb03555ccebfc0c3" gracePeriod=2 Dec 11 10:31:32 crc kubenswrapper[4746]: I1211 10:31:32.599909 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:32 crc kubenswrapper[4746]: I1211 10:31:32.679653 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:32 crc kubenswrapper[4746]: I1211 10:31:32.680350 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:32 crc kubenswrapper[4746]: I1211 10:31:32.731654 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:32 crc kubenswrapper[4746]: I1211 10:31:32.764198 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qz8h\" (UniqueName: \"kubernetes.io/projected/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-kube-api-access-6qz8h\") pod \"05b30f79-c4a3-4e93-bcd9-0ae1e6297493\" (UID: \"05b30f79-c4a3-4e93-bcd9-0ae1e6297493\") " Dec 11 10:31:32 crc kubenswrapper[4746]: I1211 10:31:32.764804 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-catalog-content\") pod \"05b30f79-c4a3-4e93-bcd9-0ae1e6297493\" (UID: \"05b30f79-c4a3-4e93-bcd9-0ae1e6297493\") " Dec 11 10:31:32 crc kubenswrapper[4746]: I1211 10:31:32.764994 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-utilities\") pod \"05b30f79-c4a3-4e93-bcd9-0ae1e6297493\" (UID: \"05b30f79-c4a3-4e93-bcd9-0ae1e6297493\") " Dec 11 10:31:32 crc kubenswrapper[4746]: I1211 10:31:32.765837 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-utilities" (OuterVolumeSpecName: "utilities") pod 
"05b30f79-c4a3-4e93-bcd9-0ae1e6297493" (UID: "05b30f79-c4a3-4e93-bcd9-0ae1e6297493"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:31:32 crc kubenswrapper[4746]: I1211 10:31:32.771255 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-kube-api-access-6qz8h" (OuterVolumeSpecName: "kube-api-access-6qz8h") pod "05b30f79-c4a3-4e93-bcd9-0ae1e6297493" (UID: "05b30f79-c4a3-4e93-bcd9-0ae1e6297493"). InnerVolumeSpecName "kube-api-access-6qz8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:31:32 crc kubenswrapper[4746]: I1211 10:31:32.823839 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05b30f79-c4a3-4e93-bcd9-0ae1e6297493" (UID: "05b30f79-c4a3-4e93-bcd9-0ae1e6297493"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:31:32 crc kubenswrapper[4746]: I1211 10:31:32.867568 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:31:32 crc kubenswrapper[4746]: I1211 10:31:32.867613 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:31:32 crc kubenswrapper[4746]: I1211 10:31:32.867629 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qz8h\" (UniqueName: \"kubernetes.io/projected/05b30f79-c4a3-4e93-bcd9-0ae1e6297493-kube-api-access-6qz8h\") on node \"crc\" DevicePath \"\"" Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.124809 4746 generic.go:334] "Generic (PLEG): container finished" podID="05b30f79-c4a3-4e93-bcd9-0ae1e6297493" containerID="db12e846b5f72cbb60ef79f90c5b8a737640de1bc7d06a80cb03555ccebfc0c3" exitCode=0 Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.124885 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8dqp" event={"ID":"05b30f79-c4a3-4e93-bcd9-0ae1e6297493","Type":"ContainerDied","Data":"db12e846b5f72cbb60ef79f90c5b8a737640de1bc7d06a80cb03555ccebfc0c3"} Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.125201 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j8dqp" event={"ID":"05b30f79-c4a3-4e93-bcd9-0ae1e6297493","Type":"ContainerDied","Data":"3d54cc93bb0a65bad311b1180e5fe2a20fa049b9a1ea01b4e43d27ec89f2c257"} Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.125224 4746 scope.go:117] "RemoveContainer" containerID="db12e846b5f72cbb60ef79f90c5b8a737640de1bc7d06a80cb03555ccebfc0c3" Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 
10:31:33.124905 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j8dqp" Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.162503 4746 scope.go:117] "RemoveContainer" containerID="878959239fab4beb13833357c20d9d798f708edfe18138be0ed311f80d07ca12" Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.164630 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j8dqp"] Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.174386 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j8dqp"] Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.194632 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.202693 4746 scope.go:117] "RemoveContainer" containerID="7218e2a7d0524ab99fc280d9f627a19a9fb8ca3ff893cb64a655991b3bb7ecab" Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.238815 4746 scope.go:117] "RemoveContainer" containerID="db12e846b5f72cbb60ef79f90c5b8a737640de1bc7d06a80cb03555ccebfc0c3" Dec 11 10:31:33 crc kubenswrapper[4746]: E1211 10:31:33.240232 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db12e846b5f72cbb60ef79f90c5b8a737640de1bc7d06a80cb03555ccebfc0c3\": container with ID starting with db12e846b5f72cbb60ef79f90c5b8a737640de1bc7d06a80cb03555ccebfc0c3 not found: ID does not exist" containerID="db12e846b5f72cbb60ef79f90c5b8a737640de1bc7d06a80cb03555ccebfc0c3" Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.240281 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"db12e846b5f72cbb60ef79f90c5b8a737640de1bc7d06a80cb03555ccebfc0c3"} err="failed to get container status 
\"db12e846b5f72cbb60ef79f90c5b8a737640de1bc7d06a80cb03555ccebfc0c3\": rpc error: code = NotFound desc = could not find container \"db12e846b5f72cbb60ef79f90c5b8a737640de1bc7d06a80cb03555ccebfc0c3\": container with ID starting with db12e846b5f72cbb60ef79f90c5b8a737640de1bc7d06a80cb03555ccebfc0c3 not found: ID does not exist" Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.240309 4746 scope.go:117] "RemoveContainer" containerID="878959239fab4beb13833357c20d9d798f708edfe18138be0ed311f80d07ca12" Dec 11 10:31:33 crc kubenswrapper[4746]: E1211 10:31:33.240691 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"878959239fab4beb13833357c20d9d798f708edfe18138be0ed311f80d07ca12\": container with ID starting with 878959239fab4beb13833357c20d9d798f708edfe18138be0ed311f80d07ca12 not found: ID does not exist" containerID="878959239fab4beb13833357c20d9d798f708edfe18138be0ed311f80d07ca12" Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.240720 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"878959239fab4beb13833357c20d9d798f708edfe18138be0ed311f80d07ca12"} err="failed to get container status \"878959239fab4beb13833357c20d9d798f708edfe18138be0ed311f80d07ca12\": rpc error: code = NotFound desc = could not find container \"878959239fab4beb13833357c20d9d798f708edfe18138be0ed311f80d07ca12\": container with ID starting with 878959239fab4beb13833357c20d9d798f708edfe18138be0ed311f80d07ca12 not found: ID does not exist" Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.240740 4746 scope.go:117] "RemoveContainer" containerID="7218e2a7d0524ab99fc280d9f627a19a9fb8ca3ff893cb64a655991b3bb7ecab" Dec 11 10:31:33 crc kubenswrapper[4746]: E1211 10:31:33.240946 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7218e2a7d0524ab99fc280d9f627a19a9fb8ca3ff893cb64a655991b3bb7ecab\": container with ID starting with 7218e2a7d0524ab99fc280d9f627a19a9fb8ca3ff893cb64a655991b3bb7ecab not found: ID does not exist" containerID="7218e2a7d0524ab99fc280d9f627a19a9fb8ca3ff893cb64a655991b3bb7ecab" Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.240978 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7218e2a7d0524ab99fc280d9f627a19a9fb8ca3ff893cb64a655991b3bb7ecab"} err="failed to get container status \"7218e2a7d0524ab99fc280d9f627a19a9fb8ca3ff893cb64a655991b3bb7ecab\": rpc error: code = NotFound desc = could not find container \"7218e2a7d0524ab99fc280d9f627a19a9fb8ca3ff893cb64a655991b3bb7ecab\": container with ID starting with 7218e2a7d0524ab99fc280d9f627a19a9fb8ca3ff893cb64a655991b3bb7ecab not found: ID does not exist" Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.648169 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05b30f79-c4a3-4e93-bcd9-0ae1e6297493" path="/var/lib/kubelet/pods/05b30f79-c4a3-4e93-bcd9-0ae1e6297493/volumes" Dec 11 10:31:33 crc kubenswrapper[4746]: I1211 10:31:33.975529 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s69t4"] Dec 11 10:31:35 crc kubenswrapper[4746]: I1211 10:31:35.172989 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-s69t4" podUID="0ebff108-f640-4d73-a081-a3a3a73e8c20" containerName="registry-server" containerID="cri-o://5c89159c83a8fa0dd7f4da621572161f83b08d26a4c63e018978a2bbfbed65ff" gracePeriod=2 Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.161461 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.186616 4746 generic.go:334] "Generic (PLEG): container finished" podID="0ebff108-f640-4d73-a081-a3a3a73e8c20" containerID="5c89159c83a8fa0dd7f4da621572161f83b08d26a4c63e018978a2bbfbed65ff" exitCode=0 Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.186870 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s69t4" event={"ID":"0ebff108-f640-4d73-a081-a3a3a73e8c20","Type":"ContainerDied","Data":"5c89159c83a8fa0dd7f4da621572161f83b08d26a4c63e018978a2bbfbed65ff"} Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.186921 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-s69t4" event={"ID":"0ebff108-f640-4d73-a081-a3a3a73e8c20","Type":"ContainerDied","Data":"37d1fa9156d246c70ead77e71a596ede8c7e95c477900e1abb2f265cefb9f15b"} Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.186951 4746 scope.go:117] "RemoveContainer" containerID="5c89159c83a8fa0dd7f4da621572161f83b08d26a4c63e018978a2bbfbed65ff" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.187202 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-s69t4" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.219752 4746 scope.go:117] "RemoveContainer" containerID="e3b1e5cdf05599d0e1057a6164453d81f326c8af3c9e3a0b1dda36f00de637b0" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.263772 4746 scope.go:117] "RemoveContainer" containerID="fff4bdf44d84b6db991a260e9f8bb1a2d26e6eca5134a11c46866fade74abb5b" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.308101 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9svgv\" (UniqueName: \"kubernetes.io/projected/0ebff108-f640-4d73-a081-a3a3a73e8c20-kube-api-access-9svgv\") pod \"0ebff108-f640-4d73-a081-a3a3a73e8c20\" (UID: \"0ebff108-f640-4d73-a081-a3a3a73e8c20\") " Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.308421 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ebff108-f640-4d73-a081-a3a3a73e8c20-catalog-content\") pod \"0ebff108-f640-4d73-a081-a3a3a73e8c20\" (UID: \"0ebff108-f640-4d73-a081-a3a3a73e8c20\") " Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.310395 4746 scope.go:117] "RemoveContainer" containerID="5c89159c83a8fa0dd7f4da621572161f83b08d26a4c63e018978a2bbfbed65ff" Dec 11 10:31:36 crc kubenswrapper[4746]: E1211 10:31:36.310822 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c89159c83a8fa0dd7f4da621572161f83b08d26a4c63e018978a2bbfbed65ff\": container with ID starting with 5c89159c83a8fa0dd7f4da621572161f83b08d26a4c63e018978a2bbfbed65ff not found: ID does not exist" containerID="5c89159c83a8fa0dd7f4da621572161f83b08d26a4c63e018978a2bbfbed65ff" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.310856 4746 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5c89159c83a8fa0dd7f4da621572161f83b08d26a4c63e018978a2bbfbed65ff"} err="failed to get container status \"5c89159c83a8fa0dd7f4da621572161f83b08d26a4c63e018978a2bbfbed65ff\": rpc error: code = NotFound desc = could not find container \"5c89159c83a8fa0dd7f4da621572161f83b08d26a4c63e018978a2bbfbed65ff\": container with ID starting with 5c89159c83a8fa0dd7f4da621572161f83b08d26a4c63e018978a2bbfbed65ff not found: ID does not exist" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.310889 4746 scope.go:117] "RemoveContainer" containerID="e3b1e5cdf05599d0e1057a6164453d81f326c8af3c9e3a0b1dda36f00de637b0" Dec 11 10:31:36 crc kubenswrapper[4746]: E1211 10:31:36.311172 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3b1e5cdf05599d0e1057a6164453d81f326c8af3c9e3a0b1dda36f00de637b0\": container with ID starting with e3b1e5cdf05599d0e1057a6164453d81f326c8af3c9e3a0b1dda36f00de637b0 not found: ID does not exist" containerID="e3b1e5cdf05599d0e1057a6164453d81f326c8af3c9e3a0b1dda36f00de637b0" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.311197 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3b1e5cdf05599d0e1057a6164453d81f326c8af3c9e3a0b1dda36f00de637b0"} err="failed to get container status \"e3b1e5cdf05599d0e1057a6164453d81f326c8af3c9e3a0b1dda36f00de637b0\": rpc error: code = NotFound desc = could not find container \"e3b1e5cdf05599d0e1057a6164453d81f326c8af3c9e3a0b1dda36f00de637b0\": container with ID starting with e3b1e5cdf05599d0e1057a6164453d81f326c8af3c9e3a0b1dda36f00de637b0 not found: ID does not exist" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.311216 4746 scope.go:117] "RemoveContainer" containerID="fff4bdf44d84b6db991a260e9f8bb1a2d26e6eca5134a11c46866fade74abb5b" Dec 11 10:31:36 crc kubenswrapper[4746]: E1211 10:31:36.311419 4746 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fff4bdf44d84b6db991a260e9f8bb1a2d26e6eca5134a11c46866fade74abb5b\": container with ID starting with fff4bdf44d84b6db991a260e9f8bb1a2d26e6eca5134a11c46866fade74abb5b not found: ID does not exist" containerID="fff4bdf44d84b6db991a260e9f8bb1a2d26e6eca5134a11c46866fade74abb5b" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.311459 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fff4bdf44d84b6db991a260e9f8bb1a2d26e6eca5134a11c46866fade74abb5b"} err="failed to get container status \"fff4bdf44d84b6db991a260e9f8bb1a2d26e6eca5134a11c46866fade74abb5b\": rpc error: code = NotFound desc = could not find container \"fff4bdf44d84b6db991a260e9f8bb1a2d26e6eca5134a11c46866fade74abb5b\": container with ID starting with fff4bdf44d84b6db991a260e9f8bb1a2d26e6eca5134a11c46866fade74abb5b not found: ID does not exist" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.314717 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ebff108-f640-4d73-a081-a3a3a73e8c20-utilities\") pod \"0ebff108-f640-4d73-a081-a3a3a73e8c20\" (UID: \"0ebff108-f640-4d73-a081-a3a3a73e8c20\") " Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.316173 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ebff108-f640-4d73-a081-a3a3a73e8c20-utilities" (OuterVolumeSpecName: "utilities") pod "0ebff108-f640-4d73-a081-a3a3a73e8c20" (UID: "0ebff108-f640-4d73-a081-a3a3a73e8c20"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.341239 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ebff108-f640-4d73-a081-a3a3a73e8c20-kube-api-access-9svgv" (OuterVolumeSpecName: "kube-api-access-9svgv") pod "0ebff108-f640-4d73-a081-a3a3a73e8c20" (UID: "0ebff108-f640-4d73-a081-a3a3a73e8c20"). InnerVolumeSpecName "kube-api-access-9svgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.345447 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ebff108-f640-4d73-a081-a3a3a73e8c20-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.345498 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9svgv\" (UniqueName: \"kubernetes.io/projected/0ebff108-f640-4d73-a081-a3a3a73e8c20-kube-api-access-9svgv\") on node \"crc\" DevicePath \"\"" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.398366 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ebff108-f640-4d73-a081-a3a3a73e8c20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ebff108-f640-4d73-a081-a3a3a73e8c20" (UID: "0ebff108-f640-4d73-a081-a3a3a73e8c20"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.448375 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ebff108-f640-4d73-a081-a3a3a73e8c20-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.543613 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-s69t4"] Dec 11 10:31:36 crc kubenswrapper[4746]: I1211 10:31:36.568461 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-s69t4"] Dec 11 10:31:37 crc kubenswrapper[4746]: I1211 10:31:37.643936 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ebff108-f640-4d73-a081-a3a3a73e8c20" path="/var/lib/kubelet/pods/0ebff108-f640-4d73-a081-a3a3a73e8c20/volumes" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.175526 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2hgpl"] Dec 11 10:32:21 crc kubenswrapper[4746]: E1211 10:32:21.176766 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ebff108-f640-4d73-a081-a3a3a73e8c20" containerName="extract-utilities" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.176784 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebff108-f640-4d73-a081-a3a3a73e8c20" containerName="extract-utilities" Dec 11 10:32:21 crc kubenswrapper[4746]: E1211 10:32:21.176798 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b30f79-c4a3-4e93-bcd9-0ae1e6297493" containerName="extract-content" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.176806 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b30f79-c4a3-4e93-bcd9-0ae1e6297493" containerName="extract-content" Dec 11 10:32:21 crc kubenswrapper[4746]: E1211 10:32:21.176836 4746 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="0ebff108-f640-4d73-a081-a3a3a73e8c20" containerName="extract-content" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.176843 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebff108-f640-4d73-a081-a3a3a73e8c20" containerName="extract-content" Dec 11 10:32:21 crc kubenswrapper[4746]: E1211 10:32:21.176858 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b30f79-c4a3-4e93-bcd9-0ae1e6297493" containerName="registry-server" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.176864 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b30f79-c4a3-4e93-bcd9-0ae1e6297493" containerName="registry-server" Dec 11 10:32:21 crc kubenswrapper[4746]: E1211 10:32:21.176874 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ebff108-f640-4d73-a081-a3a3a73e8c20" containerName="registry-server" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.176880 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ebff108-f640-4d73-a081-a3a3a73e8c20" containerName="registry-server" Dec 11 10:32:21 crc kubenswrapper[4746]: E1211 10:32:21.176889 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05b30f79-c4a3-4e93-bcd9-0ae1e6297493" containerName="extract-utilities" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.176896 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="05b30f79-c4a3-4e93-bcd9-0ae1e6297493" containerName="extract-utilities" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.177130 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ebff108-f640-4d73-a081-a3a3a73e8c20" containerName="registry-server" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.177152 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="05b30f79-c4a3-4e93-bcd9-0ae1e6297493" containerName="registry-server" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.178582 4746 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.218018 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hgpl"] Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.275505 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-catalog-content\") pod \"redhat-marketplace-2hgpl\" (UID: \"263f836c-83e1-4c43-a06a-6dcd47b0d4e9\") " pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.275575 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsjqw\" (UniqueName: \"kubernetes.io/projected/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-kube-api-access-fsjqw\") pod \"redhat-marketplace-2hgpl\" (UID: \"263f836c-83e1-4c43-a06a-6dcd47b0d4e9\") " pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.275735 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-utilities\") pod \"redhat-marketplace-2hgpl\" (UID: \"263f836c-83e1-4c43-a06a-6dcd47b0d4e9\") " pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.378265 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-utilities\") pod \"redhat-marketplace-2hgpl\" (UID: \"263f836c-83e1-4c43-a06a-6dcd47b0d4e9\") " pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.378420 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-catalog-content\") pod \"redhat-marketplace-2hgpl\" (UID: \"263f836c-83e1-4c43-a06a-6dcd47b0d4e9\") " pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.378445 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsjqw\" (UniqueName: \"kubernetes.io/projected/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-kube-api-access-fsjqw\") pod \"redhat-marketplace-2hgpl\" (UID: \"263f836c-83e1-4c43-a06a-6dcd47b0d4e9\") " pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.379037 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-utilities\") pod \"redhat-marketplace-2hgpl\" (UID: \"263f836c-83e1-4c43-a06a-6dcd47b0d4e9\") " pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.379074 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-catalog-content\") pod \"redhat-marketplace-2hgpl\" (UID: \"263f836c-83e1-4c43-a06a-6dcd47b0d4e9\") " pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.405983 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsjqw\" (UniqueName: \"kubernetes.io/projected/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-kube-api-access-fsjqw\") pod \"redhat-marketplace-2hgpl\" (UID: \"263f836c-83e1-4c43-a06a-6dcd47b0d4e9\") " pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:21 crc kubenswrapper[4746]: I1211 10:32:21.527482 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:22 crc kubenswrapper[4746]: I1211 10:32:22.019058 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hgpl"] Dec 11 10:32:22 crc kubenswrapper[4746]: I1211 10:32:22.664481 4746 generic.go:334] "Generic (PLEG): container finished" podID="263f836c-83e1-4c43-a06a-6dcd47b0d4e9" containerID="ad464dcfedcfb783fce8584b241a7a1976d72f1712762e6d7655a623dfec704d" exitCode=0 Dec 11 10:32:22 crc kubenswrapper[4746]: I1211 10:32:22.664570 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hgpl" event={"ID":"263f836c-83e1-4c43-a06a-6dcd47b0d4e9","Type":"ContainerDied","Data":"ad464dcfedcfb783fce8584b241a7a1976d72f1712762e6d7655a623dfec704d"} Dec 11 10:32:22 crc kubenswrapper[4746]: I1211 10:32:22.664873 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hgpl" event={"ID":"263f836c-83e1-4c43-a06a-6dcd47b0d4e9","Type":"ContainerStarted","Data":"0865a889fe24c0b7ee48dd37fc459e9246bec7d47949c6ed410dfac9f42134f5"} Dec 11 10:32:24 crc kubenswrapper[4746]: I1211 10:32:24.703832 4746 generic.go:334] "Generic (PLEG): container finished" podID="263f836c-83e1-4c43-a06a-6dcd47b0d4e9" containerID="efd5125da3ffa12577d636c9ff23e2bd6647b3eca4a662d7d6a658c702e7de49" exitCode=0 Dec 11 10:32:24 crc kubenswrapper[4746]: I1211 10:32:24.703954 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hgpl" event={"ID":"263f836c-83e1-4c43-a06a-6dcd47b0d4e9","Type":"ContainerDied","Data":"efd5125da3ffa12577d636c9ff23e2bd6647b3eca4a662d7d6a658c702e7de49"} Dec 11 10:32:25 crc kubenswrapper[4746]: I1211 10:32:25.717250 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hgpl" 
event={"ID":"263f836c-83e1-4c43-a06a-6dcd47b0d4e9","Type":"ContainerStarted","Data":"408201c97652f59eb7e94b2df5ee53beba6de6d1482e42cee3dfe7789db2aa8e"} Dec 11 10:32:25 crc kubenswrapper[4746]: I1211 10:32:25.756986 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2hgpl" podStartSLOduration=2.203356822 podStartE2EDuration="4.756935291s" podCreationTimestamp="2025-12-11 10:32:21 +0000 UTC" firstStartedPulling="2025-12-11 10:32:22.666762614 +0000 UTC m=+2315.526625917" lastFinishedPulling="2025-12-11 10:32:25.220341073 +0000 UTC m=+2318.080204386" observedRunningTime="2025-12-11 10:32:25.744657851 +0000 UTC m=+2318.604521164" watchObservedRunningTime="2025-12-11 10:32:25.756935291 +0000 UTC m=+2318.616798605" Dec 11 10:32:31 crc kubenswrapper[4746]: I1211 10:32:31.527815 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:31 crc kubenswrapper[4746]: I1211 10:32:31.528699 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:31 crc kubenswrapper[4746]: I1211 10:32:31.595436 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:31 crc kubenswrapper[4746]: I1211 10:32:31.826719 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:31 crc kubenswrapper[4746]: I1211 10:32:31.887570 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hgpl"] Dec 11 10:32:33 crc kubenswrapper[4746]: I1211 10:32:33.787937 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2hgpl" podUID="263f836c-83e1-4c43-a06a-6dcd47b0d4e9" containerName="registry-server" 
containerID="cri-o://408201c97652f59eb7e94b2df5ee53beba6de6d1482e42cee3dfe7789db2aa8e" gracePeriod=2 Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.241075 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.285334 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-catalog-content\") pod \"263f836c-83e1-4c43-a06a-6dcd47b0d4e9\" (UID: \"263f836c-83e1-4c43-a06a-6dcd47b0d4e9\") " Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.319689 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "263f836c-83e1-4c43-a06a-6dcd47b0d4e9" (UID: "263f836c-83e1-4c43-a06a-6dcd47b0d4e9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.387037 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-utilities\") pod \"263f836c-83e1-4c43-a06a-6dcd47b0d4e9\" (UID: \"263f836c-83e1-4c43-a06a-6dcd47b0d4e9\") " Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.387522 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsjqw\" (UniqueName: \"kubernetes.io/projected/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-kube-api-access-fsjqw\") pod \"263f836c-83e1-4c43-a06a-6dcd47b0d4e9\" (UID: \"263f836c-83e1-4c43-a06a-6dcd47b0d4e9\") " Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.388033 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.388120 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-utilities" (OuterVolumeSpecName: "utilities") pod "263f836c-83e1-4c43-a06a-6dcd47b0d4e9" (UID: "263f836c-83e1-4c43-a06a-6dcd47b0d4e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.395346 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-kube-api-access-fsjqw" (OuterVolumeSpecName: "kube-api-access-fsjqw") pod "263f836c-83e1-4c43-a06a-6dcd47b0d4e9" (UID: "263f836c-83e1-4c43-a06a-6dcd47b0d4e9"). InnerVolumeSpecName "kube-api-access-fsjqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.489683 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsjqw\" (UniqueName: \"kubernetes.io/projected/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-kube-api-access-fsjqw\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.489722 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/263f836c-83e1-4c43-a06a-6dcd47b0d4e9-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.799523 4746 generic.go:334] "Generic (PLEG): container finished" podID="263f836c-83e1-4c43-a06a-6dcd47b0d4e9" containerID="408201c97652f59eb7e94b2df5ee53beba6de6d1482e42cee3dfe7789db2aa8e" exitCode=0 Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.799574 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hgpl" event={"ID":"263f836c-83e1-4c43-a06a-6dcd47b0d4e9","Type":"ContainerDied","Data":"408201c97652f59eb7e94b2df5ee53beba6de6d1482e42cee3dfe7789db2aa8e"} Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.799612 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hgpl" event={"ID":"263f836c-83e1-4c43-a06a-6dcd47b0d4e9","Type":"ContainerDied","Data":"0865a889fe24c0b7ee48dd37fc459e9246bec7d47949c6ed410dfac9f42134f5"} Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.799607 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2hgpl" Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.799631 4746 scope.go:117] "RemoveContainer" containerID="408201c97652f59eb7e94b2df5ee53beba6de6d1482e42cee3dfe7789db2aa8e" Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.831532 4746 scope.go:117] "RemoveContainer" containerID="efd5125da3ffa12577d636c9ff23e2bd6647b3eca4a662d7d6a658c702e7de49" Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.841577 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hgpl"] Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.851718 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hgpl"] Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.858241 4746 scope.go:117] "RemoveContainer" containerID="ad464dcfedcfb783fce8584b241a7a1976d72f1712762e6d7655a623dfec704d" Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.913230 4746 scope.go:117] "RemoveContainer" containerID="408201c97652f59eb7e94b2df5ee53beba6de6d1482e42cee3dfe7789db2aa8e" Dec 11 10:32:34 crc kubenswrapper[4746]: E1211 10:32:34.913918 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"408201c97652f59eb7e94b2df5ee53beba6de6d1482e42cee3dfe7789db2aa8e\": container with ID starting with 408201c97652f59eb7e94b2df5ee53beba6de6d1482e42cee3dfe7789db2aa8e not found: ID does not exist" containerID="408201c97652f59eb7e94b2df5ee53beba6de6d1482e42cee3dfe7789db2aa8e" Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.913989 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"408201c97652f59eb7e94b2df5ee53beba6de6d1482e42cee3dfe7789db2aa8e"} err="failed to get container status \"408201c97652f59eb7e94b2df5ee53beba6de6d1482e42cee3dfe7789db2aa8e\": rpc error: code = NotFound desc = could not find container 
\"408201c97652f59eb7e94b2df5ee53beba6de6d1482e42cee3dfe7789db2aa8e\": container with ID starting with 408201c97652f59eb7e94b2df5ee53beba6de6d1482e42cee3dfe7789db2aa8e not found: ID does not exist" Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.914093 4746 scope.go:117] "RemoveContainer" containerID="efd5125da3ffa12577d636c9ff23e2bd6647b3eca4a662d7d6a658c702e7de49" Dec 11 10:32:34 crc kubenswrapper[4746]: E1211 10:32:34.914520 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efd5125da3ffa12577d636c9ff23e2bd6647b3eca4a662d7d6a658c702e7de49\": container with ID starting with efd5125da3ffa12577d636c9ff23e2bd6647b3eca4a662d7d6a658c702e7de49 not found: ID does not exist" containerID="efd5125da3ffa12577d636c9ff23e2bd6647b3eca4a662d7d6a658c702e7de49" Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.914638 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd5125da3ffa12577d636c9ff23e2bd6647b3eca4a662d7d6a658c702e7de49"} err="failed to get container status \"efd5125da3ffa12577d636c9ff23e2bd6647b3eca4a662d7d6a658c702e7de49\": rpc error: code = NotFound desc = could not find container \"efd5125da3ffa12577d636c9ff23e2bd6647b3eca4a662d7d6a658c702e7de49\": container with ID starting with efd5125da3ffa12577d636c9ff23e2bd6647b3eca4a662d7d6a658c702e7de49 not found: ID does not exist" Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.914687 4746 scope.go:117] "RemoveContainer" containerID="ad464dcfedcfb783fce8584b241a7a1976d72f1712762e6d7655a623dfec704d" Dec 11 10:32:34 crc kubenswrapper[4746]: E1211 10:32:34.915080 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad464dcfedcfb783fce8584b241a7a1976d72f1712762e6d7655a623dfec704d\": container with ID starting with ad464dcfedcfb783fce8584b241a7a1976d72f1712762e6d7655a623dfec704d not found: ID does not exist" 
containerID="ad464dcfedcfb783fce8584b241a7a1976d72f1712762e6d7655a623dfec704d" Dec 11 10:32:34 crc kubenswrapper[4746]: I1211 10:32:34.915118 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad464dcfedcfb783fce8584b241a7a1976d72f1712762e6d7655a623dfec704d"} err="failed to get container status \"ad464dcfedcfb783fce8584b241a7a1976d72f1712762e6d7655a623dfec704d\": rpc error: code = NotFound desc = could not find container \"ad464dcfedcfb783fce8584b241a7a1976d72f1712762e6d7655a623dfec704d\": container with ID starting with ad464dcfedcfb783fce8584b241a7a1976d72f1712762e6d7655a623dfec704d not found: ID does not exist" Dec 11 10:32:35 crc kubenswrapper[4746]: I1211 10:32:35.641332 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="263f836c-83e1-4c43-a06a-6dcd47b0d4e9" path="/var/lib/kubelet/pods/263f836c-83e1-4c43-a06a-6dcd47b0d4e9/volumes" Dec 11 10:32:59 crc kubenswrapper[4746]: I1211 10:32:59.877697 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:32:59 crc kubenswrapper[4746]: I1211 10:32:59.878588 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:33:29 crc kubenswrapper[4746]: I1211 10:33:29.877481 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Dec 11 10:33:29 crc kubenswrapper[4746]: I1211 10:33:29.878179 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:33:59 crc kubenswrapper[4746]: I1211 10:33:59.877900 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:33:59 crc kubenswrapper[4746]: I1211 10:33:59.879252 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:33:59 crc kubenswrapper[4746]: I1211 10:33:59.879339 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 10:33:59 crc kubenswrapper[4746]: I1211 10:33:59.880270 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806"} pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:33:59 crc kubenswrapper[4746]: I1211 10:33:59.880330 4746 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" containerID="cri-o://6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" gracePeriod=600 Dec 11 10:34:00 crc kubenswrapper[4746]: E1211 10:34:00.014983 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:34:00 crc kubenswrapper[4746]: I1211 10:34:00.737619 4746 generic.go:334] "Generic (PLEG): container finished" podID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" exitCode=0 Dec 11 10:34:00 crc kubenswrapper[4746]: I1211 10:34:00.737702 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerDied","Data":"6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806"} Dec 11 10:34:00 crc kubenswrapper[4746]: I1211 10:34:00.738152 4746 scope.go:117] "RemoveContainer" containerID="9063986f278767b0acc214606a8b6d52c8f55ba41ce46f6d90a49b5d1b51ce33" Dec 11 10:34:00 crc kubenswrapper[4746]: I1211 10:34:00.738936 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:34:00 crc kubenswrapper[4746]: E1211 10:34:00.739223 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:34:12 crc kubenswrapper[4746]: I1211 10:34:12.630862 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:34:12 crc kubenswrapper[4746]: E1211 10:34:12.631911 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:34:14 crc kubenswrapper[4746]: I1211 10:34:14.894544 4746 generic.go:334] "Generic (PLEG): container finished" podID="1424dbeb-f9d9-48d1-8b92-14828c8ea326" containerID="e8bd4e9b898aba2340c9d848c147a9846133e60703b0aef64026bc798cac48a7" exitCode=0 Dec 11 10:34:14 crc kubenswrapper[4746]: I1211 10:34:14.894698 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" event={"ID":"1424dbeb-f9d9-48d1-8b92-14828c8ea326","Type":"ContainerDied","Data":"e8bd4e9b898aba2340c9d848c147a9846133e60703b0aef64026bc798cac48a7"} Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.402529 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.483098 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-libvirt-secret-0\") pod \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.483226 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-inventory\") pod \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.483324 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-ssh-key\") pod \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.483366 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt27q\" (UniqueName: \"kubernetes.io/projected/1424dbeb-f9d9-48d1-8b92-14828c8ea326-kube-api-access-wt27q\") pod \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.483438 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-libvirt-combined-ca-bundle\") pod \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\" (UID: \"1424dbeb-f9d9-48d1-8b92-14828c8ea326\") " Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.490073 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1424dbeb-f9d9-48d1-8b92-14828c8ea326" (UID: "1424dbeb-f9d9-48d1-8b92-14828c8ea326"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.490332 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1424dbeb-f9d9-48d1-8b92-14828c8ea326-kube-api-access-wt27q" (OuterVolumeSpecName: "kube-api-access-wt27q") pod "1424dbeb-f9d9-48d1-8b92-14828c8ea326" (UID: "1424dbeb-f9d9-48d1-8b92-14828c8ea326"). InnerVolumeSpecName "kube-api-access-wt27q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.516469 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "1424dbeb-f9d9-48d1-8b92-14828c8ea326" (UID: "1424dbeb-f9d9-48d1-8b92-14828c8ea326"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.516544 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "1424dbeb-f9d9-48d1-8b92-14828c8ea326" (UID: "1424dbeb-f9d9-48d1-8b92-14828c8ea326"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.517967 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-inventory" (OuterVolumeSpecName: "inventory") pod "1424dbeb-f9d9-48d1-8b92-14828c8ea326" (UID: "1424dbeb-f9d9-48d1-8b92-14828c8ea326"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.588124 4746 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.588164 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.588178 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt27q\" (UniqueName: \"kubernetes.io/projected/1424dbeb-f9d9-48d1-8b92-14828c8ea326-kube-api-access-wt27q\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.588192 4746 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.588208 4746 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/1424dbeb-f9d9-48d1-8b92-14828c8ea326-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.987455 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" event={"ID":"1424dbeb-f9d9-48d1-8b92-14828c8ea326","Type":"ContainerDied","Data":"912e42e96b8e18e400e99a809bfb47e3744dbc46e7b33a678415ea64f24793d6"} Dec 11 10:34:16 crc kubenswrapper[4746]: I1211 10:34:16.987502 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="912e42e96b8e18e400e99a809bfb47e3744dbc46e7b33a678415ea64f24793d6" Dec 11 10:34:16 
crc kubenswrapper[4746]: I1211 10:34:16.987558 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-85b2t" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.081583 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69"] Dec 11 10:34:17 crc kubenswrapper[4746]: E1211 10:34:17.082098 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="263f836c-83e1-4c43-a06a-6dcd47b0d4e9" containerName="registry-server" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.082120 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="263f836c-83e1-4c43-a06a-6dcd47b0d4e9" containerName="registry-server" Dec 11 10:34:17 crc kubenswrapper[4746]: E1211 10:34:17.082159 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="263f836c-83e1-4c43-a06a-6dcd47b0d4e9" containerName="extract-utilities" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.082169 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="263f836c-83e1-4c43-a06a-6dcd47b0d4e9" containerName="extract-utilities" Dec 11 10:34:17 crc kubenswrapper[4746]: E1211 10:34:17.082180 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1424dbeb-f9d9-48d1-8b92-14828c8ea326" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.082189 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="1424dbeb-f9d9-48d1-8b92-14828c8ea326" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 11 10:34:17 crc kubenswrapper[4746]: E1211 10:34:17.082210 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="263f836c-83e1-4c43-a06a-6dcd47b0d4e9" containerName="extract-content" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.082219 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="263f836c-83e1-4c43-a06a-6dcd47b0d4e9" 
containerName="extract-content" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.082468 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="263f836c-83e1-4c43-a06a-6dcd47b0d4e9" containerName="registry-server" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.082498 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="1424dbeb-f9d9-48d1-8b92-14828c8ea326" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.083389 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.085817 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.086295 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.086475 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.086614 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.086856 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.087410 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.087805 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.096057 4746 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69"] Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.200752 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.200840 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.200860 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.200975 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.201355 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.201484 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.201545 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.201897 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w865s\" (UniqueName: \"kubernetes.io/projected/731d2759-47a3-4e5e-a753-e2cb1cb7c982-kube-api-access-w865s\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.202011 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.304418 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.304515 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.304546 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.304595 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.304625 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.304664 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.304716 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.304763 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w865s\" (UniqueName: \"kubernetes.io/projected/731d2759-47a3-4e5e-a753-e2cb1cb7c982-kube-api-access-w865s\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.304798 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.306742 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.310118 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.310298 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.311249 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" 
Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.311597 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.312363 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.312786 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.319978 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.324903 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w865s\" (UniqueName: \"kubernetes.io/projected/731d2759-47a3-4e5e-a753-e2cb1cb7c982-kube-api-access-w865s\") pod \"nova-edpm-deployment-openstack-edpm-ipam-jsx69\" (UID: 
\"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:17 crc kubenswrapper[4746]: I1211 10:34:17.442361 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:34:18 crc kubenswrapper[4746]: I1211 10:34:18.110673 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69"] Dec 11 10:34:19 crc kubenswrapper[4746]: I1211 10:34:19.012778 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" event={"ID":"731d2759-47a3-4e5e-a753-e2cb1cb7c982","Type":"ContainerStarted","Data":"5fe55fbe1ff94b2b68ee1bf44da775c8cd6d96d8b8b2afd448e9e3d885eda5aa"} Dec 11 10:34:20 crc kubenswrapper[4746]: I1211 10:34:20.037392 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" event={"ID":"731d2759-47a3-4e5e-a753-e2cb1cb7c982","Type":"ContainerStarted","Data":"08abcc1fd99c282d00e7e933d569e7c39223c9bdaf6776e2bc136534f4435e7a"} Dec 11 10:34:20 crc kubenswrapper[4746]: I1211 10:34:20.066595 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" podStartSLOduration=2.44214273 podStartE2EDuration="3.066572495s" podCreationTimestamp="2025-12-11 10:34:17 +0000 UTC" firstStartedPulling="2025-12-11 10:34:18.115406991 +0000 UTC m=+2430.975270304" lastFinishedPulling="2025-12-11 10:34:18.739836756 +0000 UTC m=+2431.599700069" observedRunningTime="2025-12-11 10:34:20.061663822 +0000 UTC m=+2432.921527145" watchObservedRunningTime="2025-12-11 10:34:20.066572495 +0000 UTC m=+2432.926435808" Dec 11 10:34:25 crc kubenswrapper[4746]: I1211 10:34:25.631760 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:34:25 crc 
kubenswrapper[4746]: E1211 10:34:25.632750 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:34:39 crc kubenswrapper[4746]: I1211 10:34:39.631190 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:34:39 crc kubenswrapper[4746]: E1211 10:34:39.632201 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:34:52 crc kubenswrapper[4746]: I1211 10:34:52.630925 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:34:52 crc kubenswrapper[4746]: E1211 10:34:52.632514 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:35:05 crc kubenswrapper[4746]: I1211 10:35:05.630824 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 
11 10:35:05 crc kubenswrapper[4746]: E1211 10:35:05.631927 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:35:20 crc kubenswrapper[4746]: I1211 10:35:20.630759 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:35:20 crc kubenswrapper[4746]: E1211 10:35:20.631759 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:35:34 crc kubenswrapper[4746]: I1211 10:35:34.631464 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:35:34 crc kubenswrapper[4746]: E1211 10:35:34.632619 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:35:49 crc kubenswrapper[4746]: I1211 10:35:49.631322 4746 scope.go:117] "RemoveContainer" 
containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:35:49 crc kubenswrapper[4746]: E1211 10:35:49.634477 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:36:03 crc kubenswrapper[4746]: I1211 10:36:03.631872 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:36:03 crc kubenswrapper[4746]: E1211 10:36:03.633095 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:36:15 crc kubenswrapper[4746]: I1211 10:36:15.631515 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:36:15 crc kubenswrapper[4746]: E1211 10:36:15.632599 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:36:27 crc kubenswrapper[4746]: I1211 10:36:27.640406 4746 scope.go:117] 
"RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:36:27 crc kubenswrapper[4746]: E1211 10:36:27.641682 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:36:41 crc kubenswrapper[4746]: I1211 10:36:41.630404 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:36:41 crc kubenswrapper[4746]: E1211 10:36:41.631407 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:36:55 crc kubenswrapper[4746]: I1211 10:36:55.631521 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:36:55 crc kubenswrapper[4746]: E1211 10:36:55.632611 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:36:58 crc kubenswrapper[4746]: I1211 10:36:58.606957 
4746 generic.go:334] "Generic (PLEG): container finished" podID="731d2759-47a3-4e5e-a753-e2cb1cb7c982" containerID="08abcc1fd99c282d00e7e933d569e7c39223c9bdaf6776e2bc136534f4435e7a" exitCode=0 Dec 11 10:36:58 crc kubenswrapper[4746]: I1211 10:36:58.607065 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" event={"ID":"731d2759-47a3-4e5e-a753-e2cb1cb7c982","Type":"ContainerDied","Data":"08abcc1fd99c282d00e7e933d569e7c39223c9bdaf6776e2bc136534f4435e7a"} Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.061030 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.194728 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w865s\" (UniqueName: \"kubernetes.io/projected/731d2759-47a3-4e5e-a753-e2cb1cb7c982-kube-api-access-w865s\") pod \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.194793 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-extra-config-0\") pod \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.194894 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-inventory\") pod \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.195074 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-migration-ssh-key-0\") pod \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.195103 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-combined-ca-bundle\") pod \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.195193 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-cell1-compute-config-0\") pod \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.195216 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-migration-ssh-key-1\") pod \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.195233 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-ssh-key\") pod \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\" (UID: \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.195292 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-cell1-compute-config-1\") pod \"731d2759-47a3-4e5e-a753-e2cb1cb7c982\" (UID: 
\"731d2759-47a3-4e5e-a753-e2cb1cb7c982\") " Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.202062 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "731d2759-47a3-4e5e-a753-e2cb1cb7c982" (UID: "731d2759-47a3-4e5e-a753-e2cb1cb7c982"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.214607 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/731d2759-47a3-4e5e-a753-e2cb1cb7c982-kube-api-access-w865s" (OuterVolumeSpecName: "kube-api-access-w865s") pod "731d2759-47a3-4e5e-a753-e2cb1cb7c982" (UID: "731d2759-47a3-4e5e-a753-e2cb1cb7c982"). InnerVolumeSpecName "kube-api-access-w865s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.229840 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "731d2759-47a3-4e5e-a753-e2cb1cb7c982" (UID: "731d2759-47a3-4e5e-a753-e2cb1cb7c982"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.230727 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "731d2759-47a3-4e5e-a753-e2cb1cb7c982" (UID: "731d2759-47a3-4e5e-a753-e2cb1cb7c982"). InnerVolumeSpecName "nova-extra-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.233879 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "731d2759-47a3-4e5e-a753-e2cb1cb7c982" (UID: "731d2759-47a3-4e5e-a753-e2cb1cb7c982"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.234193 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "731d2759-47a3-4e5e-a753-e2cb1cb7c982" (UID: "731d2759-47a3-4e5e-a753-e2cb1cb7c982"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.244104 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-inventory" (OuterVolumeSpecName: "inventory") pod "731d2759-47a3-4e5e-a753-e2cb1cb7c982" (UID: "731d2759-47a3-4e5e-a753-e2cb1cb7c982"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.246523 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "731d2759-47a3-4e5e-a753-e2cb1cb7c982" (UID: "731d2759-47a3-4e5e-a753-e2cb1cb7c982"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.250781 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "731d2759-47a3-4e5e-a753-e2cb1cb7c982" (UID: "731d2759-47a3-4e5e-a753-e2cb1cb7c982"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.297861 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w865s\" (UniqueName: \"kubernetes.io/projected/731d2759-47a3-4e5e-a753-e2cb1cb7c982-kube-api-access-w865s\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.298236 4746 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.298322 4746 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.298411 4746 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.298496 4746 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.298571 4746 reconciler_common.go:293] 
"Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.298646 4746 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.298736 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.298829 4746 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/731d2759-47a3-4e5e-a753-e2cb1cb7c982-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.627604 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" event={"ID":"731d2759-47a3-4e5e-a753-e2cb1cb7c982","Type":"ContainerDied","Data":"5fe55fbe1ff94b2b68ee1bf44da775c8cd6d96d8b8b2afd448e9e3d885eda5aa"} Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.628106 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5fe55fbe1ff94b2b68ee1bf44da775c8cd6d96d8b8b2afd448e9e3d885eda5aa" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.627682 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-jsx69" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.737100 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4"] Dec 11 10:37:00 crc kubenswrapper[4746]: E1211 10:37:00.737523 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="731d2759-47a3-4e5e-a753-e2cb1cb7c982" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.737537 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="731d2759-47a3-4e5e-a753-e2cb1cb7c982" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.737787 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="731d2759-47a3-4e5e-a753-e2cb1cb7c982" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.738537 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.740955 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.741518 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.741685 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.741707 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-hj6dp" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.741925 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.751072 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4"] Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.809328 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.809391 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.809433 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.809483 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmwcw\" (UniqueName: \"kubernetes.io/projected/c19e1748-770d-45a1-b823-77a77b6f22a4-kube-api-access-jmwcw\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.809610 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.809710 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.809734 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.911281 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.911356 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.911446 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.911484 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.911532 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.911567 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmwcw\" (UniqueName: \"kubernetes.io/projected/c19e1748-770d-45a1-b823-77a77b6f22a4-kube-api-access-jmwcw\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.911641 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.917014 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.917507 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.918034 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.918815 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.923271 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.923772 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:00 crc kubenswrapper[4746]: I1211 10:37:00.932458 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmwcw\" (UniqueName: \"kubernetes.io/projected/c19e1748-770d-45a1-b823-77a77b6f22a4-kube-api-access-jmwcw\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:01 crc kubenswrapper[4746]: I1211 10:37:01.061315 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:37:01 crc kubenswrapper[4746]: I1211 10:37:01.617444 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4"] Dec 11 10:37:01 crc kubenswrapper[4746]: I1211 10:37:01.624106 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 10:37:01 crc kubenswrapper[4746]: I1211 10:37:01.649472 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" event={"ID":"c19e1748-770d-45a1-b823-77a77b6f22a4","Type":"ContainerStarted","Data":"525ead206bc883783fb8c9f339af2e634a0c0859051cfb24ddc3d20044778651"} Dec 11 10:37:02 crc kubenswrapper[4746]: I1211 10:37:02.660734 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" 
event={"ID":"c19e1748-770d-45a1-b823-77a77b6f22a4","Type":"ContainerStarted","Data":"efaf9f3d8ca2702c949adcac0fa033fc566396038ac72edc080b3200cd1aa96b"} Dec 11 10:37:02 crc kubenswrapper[4746]: I1211 10:37:02.686344 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" podStartSLOduration=2.048837412 podStartE2EDuration="2.686320498s" podCreationTimestamp="2025-12-11 10:37:00 +0000 UTC" firstStartedPulling="2025-12-11 10:37:01.623822447 +0000 UTC m=+2594.483685760" lastFinishedPulling="2025-12-11 10:37:02.261305533 +0000 UTC m=+2595.121168846" observedRunningTime="2025-12-11 10:37:02.683153712 +0000 UTC m=+2595.543017025" watchObservedRunningTime="2025-12-11 10:37:02.686320498 +0000 UTC m=+2595.546183811" Dec 11 10:37:09 crc kubenswrapper[4746]: I1211 10:37:09.632091 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:37:09 crc kubenswrapper[4746]: E1211 10:37:09.633467 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:37:23 crc kubenswrapper[4746]: I1211 10:37:23.630961 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:37:23 crc kubenswrapper[4746]: E1211 10:37:23.632092 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:37:36 crc kubenswrapper[4746]: I1211 10:37:36.630492 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:37:36 crc kubenswrapper[4746]: E1211 10:37:36.631393 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:37:49 crc kubenswrapper[4746]: I1211 10:37:49.631836 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:37:49 crc kubenswrapper[4746]: E1211 10:37:49.633022 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:38:04 crc kubenswrapper[4746]: I1211 10:38:04.630357 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:38:04 crc kubenswrapper[4746]: E1211 10:38:04.631365 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:38:17 crc kubenswrapper[4746]: I1211 10:38:17.636866 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:38:17 crc kubenswrapper[4746]: E1211 10:38:17.638160 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:38:28 crc kubenswrapper[4746]: I1211 10:38:28.632255 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:38:28 crc kubenswrapper[4746]: E1211 10:38:28.633366 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:38:41 crc kubenswrapper[4746]: I1211 10:38:41.631642 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:38:41 crc kubenswrapper[4746]: E1211 10:38:41.632767 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:38:56 crc kubenswrapper[4746]: I1211 10:38:56.630607 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:38:56 crc kubenswrapper[4746]: E1211 10:38:56.631616 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:39:09 crc kubenswrapper[4746]: I1211 10:39:09.631358 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:39:10 crc kubenswrapper[4746]: I1211 10:39:10.011840 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"3ad3260619054a2be7ea78905c03fa7eb1a0df666f52e2085de9df171bfbe05b"} Dec 11 10:39:20 crc kubenswrapper[4746]: I1211 10:39:20.108443 4746 generic.go:334] "Generic (PLEG): container finished" podID="c19e1748-770d-45a1-b823-77a77b6f22a4" containerID="efaf9f3d8ca2702c949adcac0fa033fc566396038ac72edc080b3200cd1aa96b" exitCode=0 Dec 11 10:39:20 crc kubenswrapper[4746]: I1211 10:39:20.108529 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" 
event={"ID":"c19e1748-770d-45a1-b823-77a77b6f22a4","Type":"ContainerDied","Data":"efaf9f3d8ca2702c949adcac0fa033fc566396038ac72edc080b3200cd1aa96b"} Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.607985 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.722634 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-0\") pod \"c19e1748-770d-45a1-b823-77a77b6f22a4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.722798 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-2\") pod \"c19e1748-770d-45a1-b823-77a77b6f22a4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.722850 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmwcw\" (UniqueName: \"kubernetes.io/projected/c19e1748-770d-45a1-b823-77a77b6f22a4-kube-api-access-jmwcw\") pod \"c19e1748-770d-45a1-b823-77a77b6f22a4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.723232 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-inventory\") pod \"c19e1748-770d-45a1-b823-77a77b6f22a4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.723369 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ssh-key\") pod \"c19e1748-770d-45a1-b823-77a77b6f22a4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.723455 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-telemetry-combined-ca-bundle\") pod \"c19e1748-770d-45a1-b823-77a77b6f22a4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.723531 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-1\") pod \"c19e1748-770d-45a1-b823-77a77b6f22a4\" (UID: \"c19e1748-770d-45a1-b823-77a77b6f22a4\") " Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.735494 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c19e1748-770d-45a1-b823-77a77b6f22a4-kube-api-access-jmwcw" (OuterVolumeSpecName: "kube-api-access-jmwcw") pod "c19e1748-770d-45a1-b823-77a77b6f22a4" (UID: "c19e1748-770d-45a1-b823-77a77b6f22a4"). InnerVolumeSpecName "kube-api-access-jmwcw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.736172 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "c19e1748-770d-45a1-b823-77a77b6f22a4" (UID: "c19e1748-770d-45a1-b823-77a77b6f22a4"). InnerVolumeSpecName "telemetry-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.759909 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "c19e1748-770d-45a1-b823-77a77b6f22a4" (UID: "c19e1748-770d-45a1-b823-77a77b6f22a4"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.768401 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-inventory" (OuterVolumeSpecName: "inventory") pod "c19e1748-770d-45a1-b823-77a77b6f22a4" (UID: "c19e1748-770d-45a1-b823-77a77b6f22a4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.770097 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "c19e1748-770d-45a1-b823-77a77b6f22a4" (UID: "c19e1748-770d-45a1-b823-77a77b6f22a4"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.770523 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c19e1748-770d-45a1-b823-77a77b6f22a4" (UID: "c19e1748-770d-45a1-b823-77a77b6f22a4"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.787270 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "c19e1748-770d-45a1-b823-77a77b6f22a4" (UID: "c19e1748-770d-45a1-b823-77a77b6f22a4"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.825916 4746 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.825954 4746 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.825964 4746 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.825975 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmwcw\" (UniqueName: \"kubernetes.io/projected/c19e1748-770d-45a1-b823-77a77b6f22a4-kube-api-access-jmwcw\") on node \"crc\" DevicePath \"\"" Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.825988 4746 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-inventory\") on node \"crc\" DevicePath \"\"" Dec 11 10:39:21 
crc kubenswrapper[4746]: I1211 10:39:21.825999 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:39:21 crc kubenswrapper[4746]: I1211 10:39:21.826008 4746 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c19e1748-770d-45a1-b823-77a77b6f22a4-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 10:39:22 crc kubenswrapper[4746]: I1211 10:39:22.129658 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" Dec 11 10:39:22 crc kubenswrapper[4746]: I1211 10:39:22.129639 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4" event={"ID":"c19e1748-770d-45a1-b823-77a77b6f22a4","Type":"ContainerDied","Data":"525ead206bc883783fb8c9f339af2e634a0c0859051cfb24ddc3d20044778651"} Dec 11 10:39:22 crc kubenswrapper[4746]: I1211 10:39:22.129801 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="525ead206bc883783fb8c9f339af2e634a0c0859051cfb24ddc3d20044778651" Dec 11 10:39:53 crc kubenswrapper[4746]: E1211 10:39:53.481141 4746 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.214:52298->38.102.83.214:33559: write tcp 38.102.83.214:52298->38.102.83.214:33559: write: broken pipe Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.394704 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 11 10:40:05 crc kubenswrapper[4746]: E1211 10:40:05.396037 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c19e1748-770d-45a1-b823-77a77b6f22a4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 
10:40:05.396075 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c19e1748-770d-45a1-b823-77a77b6f22a4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.396343 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c19e1748-770d-45a1-b823-77a77b6f22a4" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.397166 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.400640 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.400822 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-kh2nr" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.402888 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.403129 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.412332 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.583389 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dac76301-3400-4177-8a19-8b97a7480321-config-data\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.583514 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dac76301-3400-4177-8a19-8b97a7480321-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.583575 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7qw8\" (UniqueName: \"kubernetes.io/projected/dac76301-3400-4177-8a19-8b97a7480321-kube-api-access-j7qw8\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.583726 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.583766 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/dac76301-3400-4177-8a19-8b97a7480321-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.583820 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/dac76301-3400-4177-8a19-8b97a7480321-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.584037 4746 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.584137 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.584195 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.686358 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dac76301-3400-4177-8a19-8b97a7480321-config-data\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.686845 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dac76301-3400-4177-8a19-8b97a7480321-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.686998 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7qw8\" (UniqueName: 
\"kubernetes.io/projected/dac76301-3400-4177-8a19-8b97a7480321-kube-api-access-j7qw8\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.687186 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.687308 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/dac76301-3400-4177-8a19-8b97a7480321-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.687458 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/dac76301-3400-4177-8a19-8b97a7480321-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.687698 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.687844 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.687983 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.688075 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/dac76301-3400-4177-8a19-8b97a7480321-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.688161 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/dac76301-3400-4177-8a19-8b97a7480321-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.688496 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.688494 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/dac76301-3400-4177-8a19-8b97a7480321-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.688539 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dac76301-3400-4177-8a19-8b97a7480321-config-data\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.697738 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.699605 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.701168 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.718799 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7qw8\" (UniqueName: \"kubernetes.io/projected/dac76301-3400-4177-8a19-8b97a7480321-kube-api-access-j7qw8\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " 
pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.745346 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"tempest-tests-tempest\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " pod="openstack/tempest-tests-tempest" Dec 11 10:40:05 crc kubenswrapper[4746]: I1211 10:40:05.759061 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 11 10:40:06 crc kubenswrapper[4746]: I1211 10:40:06.319639 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 11 10:40:06 crc kubenswrapper[4746]: I1211 10:40:06.918672 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"dac76301-3400-4177-8a19-8b97a7480321","Type":"ContainerStarted","Data":"ea5dd36a432eaa3689ff2563c926b1a24b8d78edc6e131a9aa7385d7e05699b7"} Dec 11 10:40:11 crc kubenswrapper[4746]: I1211 10:40:11.977179 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j6zbg"] Dec 11 10:40:11 crc kubenswrapper[4746]: I1211 10:40:11.986340 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j6zbg" Dec 11 10:40:11 crc kubenswrapper[4746]: I1211 10:40:11.998979 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j6zbg"] Dec 11 10:40:12 crc kubenswrapper[4746]: I1211 10:40:12.080636 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x92wm\" (UniqueName: \"kubernetes.io/projected/d4846c3e-c2e8-496c-9fe0-645723674fe8-kube-api-access-x92wm\") pod \"redhat-operators-j6zbg\" (UID: \"d4846c3e-c2e8-496c-9fe0-645723674fe8\") " pod="openshift-marketplace/redhat-operators-j6zbg" Dec 11 10:40:12 crc kubenswrapper[4746]: I1211 10:40:12.080705 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4846c3e-c2e8-496c-9fe0-645723674fe8-utilities\") pod \"redhat-operators-j6zbg\" (UID: \"d4846c3e-c2e8-496c-9fe0-645723674fe8\") " pod="openshift-marketplace/redhat-operators-j6zbg" Dec 11 10:40:12 crc kubenswrapper[4746]: I1211 10:40:12.080950 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4846c3e-c2e8-496c-9fe0-645723674fe8-catalog-content\") pod \"redhat-operators-j6zbg\" (UID: \"d4846c3e-c2e8-496c-9fe0-645723674fe8\") " pod="openshift-marketplace/redhat-operators-j6zbg" Dec 11 10:40:12 crc kubenswrapper[4746]: I1211 10:40:12.183225 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x92wm\" (UniqueName: \"kubernetes.io/projected/d4846c3e-c2e8-496c-9fe0-645723674fe8-kube-api-access-x92wm\") pod \"redhat-operators-j6zbg\" (UID: \"d4846c3e-c2e8-496c-9fe0-645723674fe8\") " pod="openshift-marketplace/redhat-operators-j6zbg" Dec 11 10:40:12 crc kubenswrapper[4746]: I1211 10:40:12.183289 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4846c3e-c2e8-496c-9fe0-645723674fe8-utilities\") pod \"redhat-operators-j6zbg\" (UID: \"d4846c3e-c2e8-496c-9fe0-645723674fe8\") " pod="openshift-marketplace/redhat-operators-j6zbg" Dec 11 10:40:12 crc kubenswrapper[4746]: I1211 10:40:12.183367 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4846c3e-c2e8-496c-9fe0-645723674fe8-catalog-content\") pod \"redhat-operators-j6zbg\" (UID: \"d4846c3e-c2e8-496c-9fe0-645723674fe8\") " pod="openshift-marketplace/redhat-operators-j6zbg" Dec 11 10:40:12 crc kubenswrapper[4746]: I1211 10:40:12.184293 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4846c3e-c2e8-496c-9fe0-645723674fe8-catalog-content\") pod \"redhat-operators-j6zbg\" (UID: \"d4846c3e-c2e8-496c-9fe0-645723674fe8\") " pod="openshift-marketplace/redhat-operators-j6zbg" Dec 11 10:40:12 crc kubenswrapper[4746]: I1211 10:40:12.184517 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4846c3e-c2e8-496c-9fe0-645723674fe8-utilities\") pod \"redhat-operators-j6zbg\" (UID: \"d4846c3e-c2e8-496c-9fe0-645723674fe8\") " pod="openshift-marketplace/redhat-operators-j6zbg" Dec 11 10:40:12 crc kubenswrapper[4746]: I1211 10:40:12.249545 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x92wm\" (UniqueName: \"kubernetes.io/projected/d4846c3e-c2e8-496c-9fe0-645723674fe8-kube-api-access-x92wm\") pod \"redhat-operators-j6zbg\" (UID: \"d4846c3e-c2e8-496c-9fe0-645723674fe8\") " pod="openshift-marketplace/redhat-operators-j6zbg" Dec 11 10:40:12 crc kubenswrapper[4746]: I1211 10:40:12.320081 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j6zbg" Dec 11 10:40:14 crc kubenswrapper[4746]: I1211 10:40:14.112890 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j6zbg"] Dec 11 10:40:15 crc kubenswrapper[4746]: I1211 10:40:15.017115 4746 generic.go:334] "Generic (PLEG): container finished" podID="d4846c3e-c2e8-496c-9fe0-645723674fe8" containerID="ed5ef4b98f9432f614641854ed09a91955a624cec2515b6c93e1378ee08af2a4" exitCode=0 Dec 11 10:40:15 crc kubenswrapper[4746]: I1211 10:40:15.017399 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6zbg" event={"ID":"d4846c3e-c2e8-496c-9fe0-645723674fe8","Type":"ContainerDied","Data":"ed5ef4b98f9432f614641854ed09a91955a624cec2515b6c93e1378ee08af2a4"} Dec 11 10:40:15 crc kubenswrapper[4746]: I1211 10:40:15.017514 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6zbg" event={"ID":"d4846c3e-c2e8-496c-9fe0-645723674fe8","Type":"ContainerStarted","Data":"a26fcd0a4aa05e49b1cd9d69c82a3e5f83ff7cfd099c69766ef6d988f057edc9"} Dec 11 10:40:17 crc kubenswrapper[4746]: I1211 10:40:17.339301 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6zbg" event={"ID":"d4846c3e-c2e8-496c-9fe0-645723674fe8","Type":"ContainerStarted","Data":"83368c09c000adba9e22f391575934bf6b2c1fbacff34c3ff2a70fcd9a546717"} Dec 11 10:40:18 crc kubenswrapper[4746]: I1211 10:40:18.609274 4746 generic.go:334] "Generic (PLEG): container finished" podID="d4846c3e-c2e8-496c-9fe0-645723674fe8" containerID="83368c09c000adba9e22f391575934bf6b2c1fbacff34c3ff2a70fcd9a546717" exitCode=0 Dec 11 10:40:18 crc kubenswrapper[4746]: I1211 10:40:18.609335 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6zbg" 
event={"ID":"d4846c3e-c2e8-496c-9fe0-645723674fe8","Type":"ContainerDied","Data":"83368c09c000adba9e22f391575934bf6b2c1fbacff34c3ff2a70fcd9a546717"} Dec 11 10:40:22 crc kubenswrapper[4746]: I1211 10:40:22.681474 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6zbg" event={"ID":"d4846c3e-c2e8-496c-9fe0-645723674fe8","Type":"ContainerStarted","Data":"f4e0c13f9d728daf7570fcf237dacc6d40dd4e7b8eca0e48abf3f3d3a1a94a48"} Dec 11 10:40:22 crc kubenswrapper[4746]: I1211 10:40:22.711335 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j6zbg" podStartSLOduration=4.671505016 podStartE2EDuration="11.711306138s" podCreationTimestamp="2025-12-11 10:40:11 +0000 UTC" firstStartedPulling="2025-12-11 10:40:15.022078573 +0000 UTC m=+2787.881941886" lastFinishedPulling="2025-12-11 10:40:22.061879695 +0000 UTC m=+2794.921743008" observedRunningTime="2025-12-11 10:40:22.701330749 +0000 UTC m=+2795.561194082" watchObservedRunningTime="2025-12-11 10:40:22.711306138 +0000 UTC m=+2795.571169451" Dec 11 10:40:32 crc kubenswrapper[4746]: I1211 10:40:32.321096 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j6zbg" Dec 11 10:40:32 crc kubenswrapper[4746]: I1211 10:40:32.321606 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j6zbg" Dec 11 10:40:33 crc kubenswrapper[4746]: I1211 10:40:33.369871 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j6zbg" podUID="d4846c3e-c2e8-496c-9fe0-645723674fe8" containerName="registry-server" probeResult="failure" output=< Dec 11 10:40:33 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s Dec 11 10:40:33 crc kubenswrapper[4746]: > Dec 11 10:40:43 crc kubenswrapper[4746]: I1211 10:40:43.385168 4746 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-j6zbg" podUID="d4846c3e-c2e8-496c-9fe0-645723674fe8" containerName="registry-server" probeResult="failure" output=< Dec 11 10:40:43 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s Dec 11 10:40:43 crc kubenswrapper[4746]: > Dec 11 10:40:47 crc kubenswrapper[4746]: E1211 10:40:47.530448 4746 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Dec 11 10:40:47 crc kubenswrapper[4746]: E1211 10:40:47.531164 4746 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,
},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7qw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(dac76301-3400-4177-8a19-8b97a7480321): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 11 10:40:47 crc kubenswrapper[4746]: E1211 10:40:47.533423 4746 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="dac76301-3400-4177-8a19-8b97a7480321" Dec 11 10:40:47 crc kubenswrapper[4746]: E1211 10:40:47.994872 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="dac76301-3400-4177-8a19-8b97a7480321" Dec 11 10:40:52 crc kubenswrapper[4746]: I1211 10:40:52.367909 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j6zbg" Dec 11 10:40:52 crc kubenswrapper[4746]: I1211 10:40:52.429825 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j6zbg" Dec 11 10:40:52 crc kubenswrapper[4746]: I1211 10:40:52.610784 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j6zbg"] Dec 11 10:40:54 crc kubenswrapper[4746]: I1211 10:40:54.039448 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j6zbg" podUID="d4846c3e-c2e8-496c-9fe0-645723674fe8" containerName="registry-server" containerID="cri-o://f4e0c13f9d728daf7570fcf237dacc6d40dd4e7b8eca0e48abf3f3d3a1a94a48" gracePeriod=2 Dec 11 10:40:55 crc kubenswrapper[4746]: I1211 10:40:55.059066 4746 generic.go:334] "Generic (PLEG): container finished" podID="d4846c3e-c2e8-496c-9fe0-645723674fe8" containerID="f4e0c13f9d728daf7570fcf237dacc6d40dd4e7b8eca0e48abf3f3d3a1a94a48" exitCode=0 Dec 11 10:40:55 crc kubenswrapper[4746]: I1211 10:40:55.059100 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-j6zbg" event={"ID":"d4846c3e-c2e8-496c-9fe0-645723674fe8","Type":"ContainerDied","Data":"f4e0c13f9d728daf7570fcf237dacc6d40dd4e7b8eca0e48abf3f3d3a1a94a48"}
Dec 11 10:40:55 crc kubenswrapper[4746]: I1211 10:40:55.059316 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j6zbg" event={"ID":"d4846c3e-c2e8-496c-9fe0-645723674fe8","Type":"ContainerDied","Data":"a26fcd0a4aa05e49b1cd9d69c82a3e5f83ff7cfd099c69766ef6d988f057edc9"}
Dec 11 10:40:55 crc kubenswrapper[4746]: I1211 10:40:55.059342 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a26fcd0a4aa05e49b1cd9d69c82a3e5f83ff7cfd099c69766ef6d988f057edc9"
Dec 11 10:40:55 crc kubenswrapper[4746]: I1211 10:40:55.119745 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j6zbg"
Dec 11 10:40:55 crc kubenswrapper[4746]: I1211 10:40:55.250150 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x92wm\" (UniqueName: \"kubernetes.io/projected/d4846c3e-c2e8-496c-9fe0-645723674fe8-kube-api-access-x92wm\") pod \"d4846c3e-c2e8-496c-9fe0-645723674fe8\" (UID: \"d4846c3e-c2e8-496c-9fe0-645723674fe8\") "
Dec 11 10:40:55 crc kubenswrapper[4746]: I1211 10:40:55.250254 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4846c3e-c2e8-496c-9fe0-645723674fe8-catalog-content\") pod \"d4846c3e-c2e8-496c-9fe0-645723674fe8\" (UID: \"d4846c3e-c2e8-496c-9fe0-645723674fe8\") "
Dec 11 10:40:55 crc kubenswrapper[4746]: I1211 10:40:55.250329 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4846c3e-c2e8-496c-9fe0-645723674fe8-utilities\") pod \"d4846c3e-c2e8-496c-9fe0-645723674fe8\" (UID: \"d4846c3e-c2e8-496c-9fe0-645723674fe8\") "
Dec 11 10:40:55 crc kubenswrapper[4746]: I1211 10:40:55.250939 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4846c3e-c2e8-496c-9fe0-645723674fe8-utilities" (OuterVolumeSpecName: "utilities") pod "d4846c3e-c2e8-496c-9fe0-645723674fe8" (UID: "d4846c3e-c2e8-496c-9fe0-645723674fe8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:40:55 crc kubenswrapper[4746]: I1211 10:40:55.262605 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4846c3e-c2e8-496c-9fe0-645723674fe8-kube-api-access-x92wm" (OuterVolumeSpecName: "kube-api-access-x92wm") pod "d4846c3e-c2e8-496c-9fe0-645723674fe8" (UID: "d4846c3e-c2e8-496c-9fe0-645723674fe8"). InnerVolumeSpecName "kube-api-access-x92wm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:40:55 crc kubenswrapper[4746]: I1211 10:40:55.353133 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x92wm\" (UniqueName: \"kubernetes.io/projected/d4846c3e-c2e8-496c-9fe0-645723674fe8-kube-api-access-x92wm\") on node \"crc\" DevicePath \"\""
Dec 11 10:40:55 crc kubenswrapper[4746]: I1211 10:40:55.353173 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4846c3e-c2e8-496c-9fe0-645723674fe8-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 10:40:55 crc kubenswrapper[4746]: I1211 10:40:55.361116 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4846c3e-c2e8-496c-9fe0-645723674fe8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4846c3e-c2e8-496c-9fe0-645723674fe8" (UID: "d4846c3e-c2e8-496c-9fe0-645723674fe8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:40:55 crc kubenswrapper[4746]: I1211 10:40:55.454702 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4846c3e-c2e8-496c-9fe0-645723674fe8-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 10:40:56 crc kubenswrapper[4746]: I1211 10:40:56.068361 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j6zbg"
Dec 11 10:40:56 crc kubenswrapper[4746]: I1211 10:40:56.100925 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j6zbg"]
Dec 11 10:40:56 crc kubenswrapper[4746]: I1211 10:40:56.114655 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j6zbg"]
Dec 11 10:40:57 crc kubenswrapper[4746]: I1211 10:40:57.645773 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4846c3e-c2e8-496c-9fe0-645723674fe8" path="/var/lib/kubelet/pods/d4846c3e-c2e8-496c-9fe0-645723674fe8/volumes"
Dec 11 10:41:03 crc kubenswrapper[4746]: I1211 10:41:03.381786 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Dec 11 10:41:05 crc kubenswrapper[4746]: I1211 10:41:05.167720 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"dac76301-3400-4177-8a19-8b97a7480321","Type":"ContainerStarted","Data":"79cc54401a088d5633969999674d11fa594f30c622faf33e7b459a9278915f97"}
Dec 11 10:41:06 crc kubenswrapper[4746]: I1211 10:41:06.197890 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=5.151304315 podStartE2EDuration="1m2.197866931s" podCreationTimestamp="2025-12-11 10:40:04 +0000 UTC" firstStartedPulling="2025-12-11 10:40:06.332161862 +0000 UTC m=+2779.192025175" lastFinishedPulling="2025-12-11 10:41:03.378724478 +0000 UTC m=+2836.238587791" observedRunningTime="2025-12-11 10:41:06.192701833 +0000 UTC m=+2839.052565156" watchObservedRunningTime="2025-12-11 10:41:06.197866931 +0000 UTC m=+2839.057730244"
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.656790 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jnk2g"]
Dec 11 10:41:28 crc kubenswrapper[4746]: E1211 10:41:28.658857 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4846c3e-c2e8-496c-9fe0-645723674fe8" containerName="extract-content"
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.658878 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4846c3e-c2e8-496c-9fe0-645723674fe8" containerName="extract-content"
Dec 11 10:41:28 crc kubenswrapper[4746]: E1211 10:41:28.658894 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4846c3e-c2e8-496c-9fe0-645723674fe8" containerName="extract-utilities"
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.658900 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4846c3e-c2e8-496c-9fe0-645723674fe8" containerName="extract-utilities"
Dec 11 10:41:28 crc kubenswrapper[4746]: E1211 10:41:28.658936 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4846c3e-c2e8-496c-9fe0-645723674fe8" containerName="registry-server"
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.658942 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4846c3e-c2e8-496c-9fe0-645723674fe8" containerName="registry-server"
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.659275 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4846c3e-c2e8-496c-9fe0-645723674fe8" containerName="registry-server"
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.668319 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.681096 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jnk2g"]
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.764691 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0abddcd7-112f-450d-b7fb-c0d876bd3f51-utilities\") pod \"community-operators-jnk2g\" (UID: \"0abddcd7-112f-450d-b7fb-c0d876bd3f51\") " pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.765116 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0abddcd7-112f-450d-b7fb-c0d876bd3f51-catalog-content\") pod \"community-operators-jnk2g\" (UID: \"0abddcd7-112f-450d-b7fb-c0d876bd3f51\") " pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.765415 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p67t2\" (UniqueName: \"kubernetes.io/projected/0abddcd7-112f-450d-b7fb-c0d876bd3f51-kube-api-access-p67t2\") pod \"community-operators-jnk2g\" (UID: \"0abddcd7-112f-450d-b7fb-c0d876bd3f51\") " pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.867275 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p67t2\" (UniqueName: \"kubernetes.io/projected/0abddcd7-112f-450d-b7fb-c0d876bd3f51-kube-api-access-p67t2\") pod \"community-operators-jnk2g\" (UID: \"0abddcd7-112f-450d-b7fb-c0d876bd3f51\") " pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.867435 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0abddcd7-112f-450d-b7fb-c0d876bd3f51-utilities\") pod \"community-operators-jnk2g\" (UID: \"0abddcd7-112f-450d-b7fb-c0d876bd3f51\") " pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.868110 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0abddcd7-112f-450d-b7fb-c0d876bd3f51-utilities\") pod \"community-operators-jnk2g\" (UID: \"0abddcd7-112f-450d-b7fb-c0d876bd3f51\") " pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.868194 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0abddcd7-112f-450d-b7fb-c0d876bd3f51-catalog-content\") pod \"community-operators-jnk2g\" (UID: \"0abddcd7-112f-450d-b7fb-c0d876bd3f51\") " pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.868445 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0abddcd7-112f-450d-b7fb-c0d876bd3f51-catalog-content\") pod \"community-operators-jnk2g\" (UID: \"0abddcd7-112f-450d-b7fb-c0d876bd3f51\") " pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.890143 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p67t2\" (UniqueName: \"kubernetes.io/projected/0abddcd7-112f-450d-b7fb-c0d876bd3f51-kube-api-access-p67t2\") pod \"community-operators-jnk2g\" (UID: \"0abddcd7-112f-450d-b7fb-c0d876bd3f51\") " pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:28 crc kubenswrapper[4746]: I1211 10:41:28.998650 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:29 crc kubenswrapper[4746]: I1211 10:41:29.608678 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jnk2g"]
Dec 11 10:41:29 crc kubenswrapper[4746]: I1211 10:41:29.877804 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 10:41:29 crc kubenswrapper[4746]: I1211 10:41:29.877904 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 10:41:30 crc kubenswrapper[4746]: I1211 10:41:30.428091 4746 generic.go:334] "Generic (PLEG): container finished" podID="0abddcd7-112f-450d-b7fb-c0d876bd3f51" containerID="19d6318e55d9b974cfec4183bd0d4e94499dc594711952f61fb3823c1481746e" exitCode=0
Dec 11 10:41:30 crc kubenswrapper[4746]: I1211 10:41:30.428172 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jnk2g" event={"ID":"0abddcd7-112f-450d-b7fb-c0d876bd3f51","Type":"ContainerDied","Data":"19d6318e55d9b974cfec4183bd0d4e94499dc594711952f61fb3823c1481746e"}
Dec 11 10:41:30 crc kubenswrapper[4746]: I1211 10:41:30.428209 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jnk2g" event={"ID":"0abddcd7-112f-450d-b7fb-c0d876bd3f51","Type":"ContainerStarted","Data":"ffe07b25eb3b036b973e6ae3449e776a409c7c6fa5d63b02f99e3e84345984f1"}
Dec 11 10:41:32 crc kubenswrapper[4746]: I1211 10:41:32.453992 4746 generic.go:334] "Generic (PLEG): container finished" podID="0abddcd7-112f-450d-b7fb-c0d876bd3f51" containerID="1559f2eecff9703c424ba8a3fcb8fe479d0491e2d3426740240260c2f8aff83a" exitCode=0
Dec 11 10:41:32 crc kubenswrapper[4746]: I1211 10:41:32.454096 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jnk2g" event={"ID":"0abddcd7-112f-450d-b7fb-c0d876bd3f51","Type":"ContainerDied","Data":"1559f2eecff9703c424ba8a3fcb8fe479d0491e2d3426740240260c2f8aff83a"}
Dec 11 10:41:33 crc kubenswrapper[4746]: I1211 10:41:33.467011 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jnk2g" event={"ID":"0abddcd7-112f-450d-b7fb-c0d876bd3f51","Type":"ContainerStarted","Data":"168c65602b6e5bba388b95a7d880c872ce40989e8445816c40f4d82aee952a31"}
Dec 11 10:41:33 crc kubenswrapper[4746]: I1211 10:41:33.491281 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jnk2g" podStartSLOduration=2.904061994 podStartE2EDuration="5.491258934s" podCreationTimestamp="2025-12-11 10:41:28 +0000 UTC" firstStartedPulling="2025-12-11 10:41:30.4322004 +0000 UTC m=+2863.292063733" lastFinishedPulling="2025-12-11 10:41:33.01939736 +0000 UTC m=+2865.879260673" observedRunningTime="2025-12-11 10:41:33.48482375 +0000 UTC m=+2866.344687083" watchObservedRunningTime="2025-12-11 10:41:33.491258934 +0000 UTC m=+2866.351122257"
Dec 11 10:41:39 crc kubenswrapper[4746]: I1211 10:41:38.999410 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:39 crc kubenswrapper[4746]: I1211 10:41:39.000241 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:39 crc kubenswrapper[4746]: I1211 10:41:39.079140 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:39 crc kubenswrapper[4746]: I1211 10:41:39.605473 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:40 crc kubenswrapper[4746]: I1211 10:41:40.590741 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m82pz"]
Dec 11 10:41:40 crc kubenswrapper[4746]: I1211 10:41:40.594801 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:40 crc kubenswrapper[4746]: I1211 10:41:40.631758 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m82pz"]
Dec 11 10:41:40 crc kubenswrapper[4746]: I1211 10:41:40.705944 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlgff\" (UniqueName: \"kubernetes.io/projected/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-kube-api-access-dlgff\") pod \"certified-operators-m82pz\" (UID: \"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e\") " pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:40 crc kubenswrapper[4746]: I1211 10:41:40.706149 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-utilities\") pod \"certified-operators-m82pz\" (UID: \"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e\") " pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:40 crc kubenswrapper[4746]: I1211 10:41:40.706234 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-catalog-content\") pod \"certified-operators-m82pz\" (UID: \"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e\") " pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:40 crc kubenswrapper[4746]: I1211 10:41:40.808601 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-utilities\") pod \"certified-operators-m82pz\" (UID: \"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e\") " pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:40 crc kubenswrapper[4746]: I1211 10:41:40.808680 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-catalog-content\") pod \"certified-operators-m82pz\" (UID: \"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e\") " pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:40 crc kubenswrapper[4746]: I1211 10:41:40.808760 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlgff\" (UniqueName: \"kubernetes.io/projected/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-kube-api-access-dlgff\") pod \"certified-operators-m82pz\" (UID: \"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e\") " pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:40 crc kubenswrapper[4746]: I1211 10:41:40.809244 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-utilities\") pod \"certified-operators-m82pz\" (UID: \"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e\") " pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:40 crc kubenswrapper[4746]: I1211 10:41:40.809380 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-catalog-content\") pod \"certified-operators-m82pz\" (UID: \"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e\") " pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:40 crc kubenswrapper[4746]: I1211 10:41:40.834140 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlgff\" (UniqueName: \"kubernetes.io/projected/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-kube-api-access-dlgff\") pod \"certified-operators-m82pz\" (UID: \"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e\") " pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:40 crc kubenswrapper[4746]: I1211 10:41:40.918190 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:41 crc kubenswrapper[4746]: I1211 10:41:41.404533 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m82pz"]
Dec 11 10:41:41 crc kubenswrapper[4746]: I1211 10:41:41.581512 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m82pz" event={"ID":"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e","Type":"ContainerStarted","Data":"505019db7414eb66e933d7fbeb9e3e246c32c9442806864e3ef4b9d83198aa29"}
Dec 11 10:41:42 crc kubenswrapper[4746]: I1211 10:41:42.605272 4746 generic.go:334] "Generic (PLEG): container finished" podID="0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e" containerID="d87f6e500597ad4239d81fe84f70a8ff67e160b156aa45cfd5e367d70aa28fbf" exitCode=0
Dec 11 10:41:42 crc kubenswrapper[4746]: I1211 10:41:42.605684 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m82pz" event={"ID":"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e","Type":"ContainerDied","Data":"d87f6e500597ad4239d81fe84f70a8ff67e160b156aa45cfd5e367d70aa28fbf"}
Dec 11 10:41:42 crc kubenswrapper[4746]: I1211 10:41:42.958732 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jnk2g"]
Dec 11 10:41:42 crc kubenswrapper[4746]: I1211 10:41:42.959028 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jnk2g" podUID="0abddcd7-112f-450d-b7fb-c0d876bd3f51" containerName="registry-server" containerID="cri-o://168c65602b6e5bba388b95a7d880c872ce40989e8445816c40f4d82aee952a31" gracePeriod=2
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.508178 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.573978 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0abddcd7-112f-450d-b7fb-c0d876bd3f51-catalog-content\") pod \"0abddcd7-112f-450d-b7fb-c0d876bd3f51\" (UID: \"0abddcd7-112f-450d-b7fb-c0d876bd3f51\") "
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.574078 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p67t2\" (UniqueName: \"kubernetes.io/projected/0abddcd7-112f-450d-b7fb-c0d876bd3f51-kube-api-access-p67t2\") pod \"0abddcd7-112f-450d-b7fb-c0d876bd3f51\" (UID: \"0abddcd7-112f-450d-b7fb-c0d876bd3f51\") "
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.574298 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0abddcd7-112f-450d-b7fb-c0d876bd3f51-utilities\") pod \"0abddcd7-112f-450d-b7fb-c0d876bd3f51\" (UID: \"0abddcd7-112f-450d-b7fb-c0d876bd3f51\") "
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.575622 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0abddcd7-112f-450d-b7fb-c0d876bd3f51-utilities" (OuterVolumeSpecName: "utilities") pod "0abddcd7-112f-450d-b7fb-c0d876bd3f51" (UID: "0abddcd7-112f-450d-b7fb-c0d876bd3f51"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.583461 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0abddcd7-112f-450d-b7fb-c0d876bd3f51-kube-api-access-p67t2" (OuterVolumeSpecName: "kube-api-access-p67t2") pod "0abddcd7-112f-450d-b7fb-c0d876bd3f51" (UID: "0abddcd7-112f-450d-b7fb-c0d876bd3f51"). InnerVolumeSpecName "kube-api-access-p67t2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.617845 4746 generic.go:334] "Generic (PLEG): container finished" podID="0abddcd7-112f-450d-b7fb-c0d876bd3f51" containerID="168c65602b6e5bba388b95a7d880c872ce40989e8445816c40f4d82aee952a31" exitCode=0
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.617968 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jnk2g"
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.617932 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jnk2g" event={"ID":"0abddcd7-112f-450d-b7fb-c0d876bd3f51","Type":"ContainerDied","Data":"168c65602b6e5bba388b95a7d880c872ce40989e8445816c40f4d82aee952a31"}
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.618033 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jnk2g" event={"ID":"0abddcd7-112f-450d-b7fb-c0d876bd3f51","Type":"ContainerDied","Data":"ffe07b25eb3b036b973e6ae3449e776a409c7c6fa5d63b02f99e3e84345984f1"}
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.618111 4746 scope.go:117] "RemoveContainer" containerID="168c65602b6e5bba388b95a7d880c872ce40989e8445816c40f4d82aee952a31"
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.629324 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0abddcd7-112f-450d-b7fb-c0d876bd3f51-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0abddcd7-112f-450d-b7fb-c0d876bd3f51" (UID: "0abddcd7-112f-450d-b7fb-c0d876bd3f51"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.676495 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0abddcd7-112f-450d-b7fb-c0d876bd3f51-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.676973 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0abddcd7-112f-450d-b7fb-c0d876bd3f51-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.676987 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p67t2\" (UniqueName: \"kubernetes.io/projected/0abddcd7-112f-450d-b7fb-c0d876bd3f51-kube-api-access-p67t2\") on node \"crc\" DevicePath \"\""
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.717966 4746 scope.go:117] "RemoveContainer" containerID="1559f2eecff9703c424ba8a3fcb8fe479d0491e2d3426740240260c2f8aff83a"
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.758585 4746 scope.go:117] "RemoveContainer" containerID="19d6318e55d9b974cfec4183bd0d4e94499dc594711952f61fb3823c1481746e"
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.791467 4746 scope.go:117] "RemoveContainer" containerID="168c65602b6e5bba388b95a7d880c872ce40989e8445816c40f4d82aee952a31"
Dec 11 10:41:43 crc kubenswrapper[4746]: E1211 10:41:43.792129 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"168c65602b6e5bba388b95a7d880c872ce40989e8445816c40f4d82aee952a31\": container with ID starting with 168c65602b6e5bba388b95a7d880c872ce40989e8445816c40f4d82aee952a31 not found: ID does not exist" containerID="168c65602b6e5bba388b95a7d880c872ce40989e8445816c40f4d82aee952a31"
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.792192 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"168c65602b6e5bba388b95a7d880c872ce40989e8445816c40f4d82aee952a31"} err="failed to get container status \"168c65602b6e5bba388b95a7d880c872ce40989e8445816c40f4d82aee952a31\": rpc error: code = NotFound desc = could not find container \"168c65602b6e5bba388b95a7d880c872ce40989e8445816c40f4d82aee952a31\": container with ID starting with 168c65602b6e5bba388b95a7d880c872ce40989e8445816c40f4d82aee952a31 not found: ID does not exist"
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.792226 4746 scope.go:117] "RemoveContainer" containerID="1559f2eecff9703c424ba8a3fcb8fe479d0491e2d3426740240260c2f8aff83a"
Dec 11 10:41:43 crc kubenswrapper[4746]: E1211 10:41:43.792616 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1559f2eecff9703c424ba8a3fcb8fe479d0491e2d3426740240260c2f8aff83a\": container with ID starting with 1559f2eecff9703c424ba8a3fcb8fe479d0491e2d3426740240260c2f8aff83a not found: ID does not exist" containerID="1559f2eecff9703c424ba8a3fcb8fe479d0491e2d3426740240260c2f8aff83a"
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.792654 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1559f2eecff9703c424ba8a3fcb8fe479d0491e2d3426740240260c2f8aff83a"} err="failed to get container status \"1559f2eecff9703c424ba8a3fcb8fe479d0491e2d3426740240260c2f8aff83a\": rpc error: code = NotFound desc = could not find container \"1559f2eecff9703c424ba8a3fcb8fe479d0491e2d3426740240260c2f8aff83a\": container with ID starting with 1559f2eecff9703c424ba8a3fcb8fe479d0491e2d3426740240260c2f8aff83a not found: ID does not exist"
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.792682 4746 scope.go:117] "RemoveContainer" containerID="19d6318e55d9b974cfec4183bd0d4e94499dc594711952f61fb3823c1481746e"
Dec 11 10:41:43 crc kubenswrapper[4746]: E1211 10:41:43.793103 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19d6318e55d9b974cfec4183bd0d4e94499dc594711952f61fb3823c1481746e\": container with ID starting with 19d6318e55d9b974cfec4183bd0d4e94499dc594711952f61fb3823c1481746e not found: ID does not exist" containerID="19d6318e55d9b974cfec4183bd0d4e94499dc594711952f61fb3823c1481746e"
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.793167 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19d6318e55d9b974cfec4183bd0d4e94499dc594711952f61fb3823c1481746e"} err="failed to get container status \"19d6318e55d9b974cfec4183bd0d4e94499dc594711952f61fb3823c1481746e\": rpc error: code = NotFound desc = could not find container \"19d6318e55d9b974cfec4183bd0d4e94499dc594711952f61fb3823c1481746e\": container with ID starting with 19d6318e55d9b974cfec4183bd0d4e94499dc594711952f61fb3823c1481746e not found: ID does not exist"
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.950016 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jnk2g"]
Dec 11 10:41:43 crc kubenswrapper[4746]: I1211 10:41:43.964304 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jnk2g"]
Dec 11 10:41:44 crc kubenswrapper[4746]: I1211 10:41:44.633318 4746 generic.go:334] "Generic (PLEG): container finished" podID="0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e" containerID="640a926747f5c4a9303fc7a754f7e8ed950576b6ddda44a89396382bdf3dd259" exitCode=0
Dec 11 10:41:44 crc kubenswrapper[4746]: I1211 10:41:44.633366 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m82pz" event={"ID":"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e","Type":"ContainerDied","Data":"640a926747f5c4a9303fc7a754f7e8ed950576b6ddda44a89396382bdf3dd259"}
Dec 11 10:41:45 crc kubenswrapper[4746]: I1211 10:41:45.643657 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0abddcd7-112f-450d-b7fb-c0d876bd3f51" path="/var/lib/kubelet/pods/0abddcd7-112f-450d-b7fb-c0d876bd3f51/volumes"
Dec 11 10:41:45 crc kubenswrapper[4746]: I1211 10:41:45.651086 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m82pz" event={"ID":"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e","Type":"ContainerStarted","Data":"1acf0b7c84c2aa86013dc4f5d07ab2cae225604f6acbbf4637074a59d88eb0ab"}
Dec 11 10:41:45 crc kubenswrapper[4746]: I1211 10:41:45.687919 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m82pz" podStartSLOduration=2.9380553259999997 podStartE2EDuration="5.687887461s" podCreationTimestamp="2025-12-11 10:41:40 +0000 UTC" firstStartedPulling="2025-12-11 10:41:42.611185961 +0000 UTC m=+2875.471049274" lastFinishedPulling="2025-12-11 10:41:45.361018096 +0000 UTC m=+2878.220881409" observedRunningTime="2025-12-11 10:41:45.670517772 +0000 UTC m=+2878.530381085" watchObservedRunningTime="2025-12-11 10:41:45.687887461 +0000 UTC m=+2878.547750774"
Dec 11 10:41:50 crc kubenswrapper[4746]: I1211 10:41:50.918964 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:50 crc kubenswrapper[4746]: I1211 10:41:50.920069 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:50 crc kubenswrapper[4746]: I1211 10:41:50.976242 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:51 crc kubenswrapper[4746]: I1211 10:41:51.789699 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:52 crc kubenswrapper[4746]: I1211 10:41:52.763813 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m82pz"]
Dec 11 10:41:53 crc kubenswrapper[4746]: I1211 10:41:53.759598 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m82pz" podUID="0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e" containerName="registry-server" containerID="cri-o://1acf0b7c84c2aa86013dc4f5d07ab2cae225604f6acbbf4637074a59d88eb0ab" gracePeriod=2
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.416658 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.524964 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-utilities\") pod \"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e\" (UID: \"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e\") "
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.525062 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlgff\" (UniqueName: \"kubernetes.io/projected/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-kube-api-access-dlgff\") pod \"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e\" (UID: \"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e\") "
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.525106 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-catalog-content\") pod \"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e\" (UID: \"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e\") "
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.526165 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-utilities" (OuterVolumeSpecName: "utilities") pod "0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e" (UID: "0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.535411 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-kube-api-access-dlgff" (OuterVolumeSpecName: "kube-api-access-dlgff") pod "0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e" (UID: "0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e"). InnerVolumeSpecName "kube-api-access-dlgff". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.627764 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-utilities\") on node \"crc\" DevicePath \"\""
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.627835 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlgff\" (UniqueName: \"kubernetes.io/projected/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-kube-api-access-dlgff\") on node \"crc\" DevicePath \"\""
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.771418 4746 generic.go:334] "Generic (PLEG): container finished" podID="0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e" containerID="1acf0b7c84c2aa86013dc4f5d07ab2cae225604f6acbbf4637074a59d88eb0ab" exitCode=0
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.771487 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m82pz"
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.771491 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m82pz" event={"ID":"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e","Type":"ContainerDied","Data":"1acf0b7c84c2aa86013dc4f5d07ab2cae225604f6acbbf4637074a59d88eb0ab"}
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.772004 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m82pz" event={"ID":"0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e","Type":"ContainerDied","Data":"505019db7414eb66e933d7fbeb9e3e246c32c9442806864e3ef4b9d83198aa29"}
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.772030 4746 scope.go:117] "RemoveContainer" containerID="1acf0b7c84c2aa86013dc4f5d07ab2cae225604f6acbbf4637074a59d88eb0ab"
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.800864 4746 scope.go:117] "RemoveContainer" containerID="640a926747f5c4a9303fc7a754f7e8ed950576b6ddda44a89396382bdf3dd259"
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.830402 4746 scope.go:117] "RemoveContainer" containerID="d87f6e500597ad4239d81fe84f70a8ff67e160b156aa45cfd5e367d70aa28fbf"
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.837456 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e" (UID: "0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.888903 4746 scope.go:117] "RemoveContainer" containerID="1acf0b7c84c2aa86013dc4f5d07ab2cae225604f6acbbf4637074a59d88eb0ab"
Dec 11 10:41:54 crc kubenswrapper[4746]: E1211 10:41:54.889334 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1acf0b7c84c2aa86013dc4f5d07ab2cae225604f6acbbf4637074a59d88eb0ab\": container with ID starting with 1acf0b7c84c2aa86013dc4f5d07ab2cae225604f6acbbf4637074a59d88eb0ab not found: ID does not exist" containerID="1acf0b7c84c2aa86013dc4f5d07ab2cae225604f6acbbf4637074a59d88eb0ab"
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.889380 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1acf0b7c84c2aa86013dc4f5d07ab2cae225604f6acbbf4637074a59d88eb0ab"} err="failed to get container status \"1acf0b7c84c2aa86013dc4f5d07ab2cae225604f6acbbf4637074a59d88eb0ab\": rpc error: code = NotFound desc = could not find container \"1acf0b7c84c2aa86013dc4f5d07ab2cae225604f6acbbf4637074a59d88eb0ab\": container with ID starting with 1acf0b7c84c2aa86013dc4f5d07ab2cae225604f6acbbf4637074a59d88eb0ab not found: ID does not exist"
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.889407 4746 scope.go:117] "RemoveContainer" containerID="640a926747f5c4a9303fc7a754f7e8ed950576b6ddda44a89396382bdf3dd259"
Dec 11 10:41:54 crc kubenswrapper[4746]: E1211 10:41:54.889617 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"640a926747f5c4a9303fc7a754f7e8ed950576b6ddda44a89396382bdf3dd259\": container with ID starting with 640a926747f5c4a9303fc7a754f7e8ed950576b6ddda44a89396382bdf3dd259 not found: ID does not exist" containerID="640a926747f5c4a9303fc7a754f7e8ed950576b6ddda44a89396382bdf3dd259"
Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.889643
4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"640a926747f5c4a9303fc7a754f7e8ed950576b6ddda44a89396382bdf3dd259"} err="failed to get container status \"640a926747f5c4a9303fc7a754f7e8ed950576b6ddda44a89396382bdf3dd259\": rpc error: code = NotFound desc = could not find container \"640a926747f5c4a9303fc7a754f7e8ed950576b6ddda44a89396382bdf3dd259\": container with ID starting with 640a926747f5c4a9303fc7a754f7e8ed950576b6ddda44a89396382bdf3dd259 not found: ID does not exist" Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.889660 4746 scope.go:117] "RemoveContainer" containerID="d87f6e500597ad4239d81fe84f70a8ff67e160b156aa45cfd5e367d70aa28fbf" Dec 11 10:41:54 crc kubenswrapper[4746]: E1211 10:41:54.889869 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d87f6e500597ad4239d81fe84f70a8ff67e160b156aa45cfd5e367d70aa28fbf\": container with ID starting with d87f6e500597ad4239d81fe84f70a8ff67e160b156aa45cfd5e367d70aa28fbf not found: ID does not exist" containerID="d87f6e500597ad4239d81fe84f70a8ff67e160b156aa45cfd5e367d70aa28fbf" Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.889896 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d87f6e500597ad4239d81fe84f70a8ff67e160b156aa45cfd5e367d70aa28fbf"} err="failed to get container status \"d87f6e500597ad4239d81fe84f70a8ff67e160b156aa45cfd5e367d70aa28fbf\": rpc error: code = NotFound desc = could not find container \"d87f6e500597ad4239d81fe84f70a8ff67e160b156aa45cfd5e367d70aa28fbf\": container with ID starting with d87f6e500597ad4239d81fe84f70a8ff67e160b156aa45cfd5e367d70aa28fbf not found: ID does not exist" Dec 11 10:41:54 crc kubenswrapper[4746]: I1211 10:41:54.934191 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:41:55 crc kubenswrapper[4746]: I1211 10:41:55.110242 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m82pz"] Dec 11 10:41:55 crc kubenswrapper[4746]: I1211 10:41:55.119945 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m82pz"] Dec 11 10:41:55 crc kubenswrapper[4746]: I1211 10:41:55.643465 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e" path="/var/lib/kubelet/pods/0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e/volumes" Dec 11 10:41:59 crc kubenswrapper[4746]: I1211 10:41:59.877981 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:41:59 crc kubenswrapper[4746]: I1211 10:41:59.878597 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.721805 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rnszx"] Dec 11 10:42:26 crc kubenswrapper[4746]: E1211 10:42:26.722981 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0abddcd7-112f-450d-b7fb-c0d876bd3f51" containerName="extract-content" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.723003 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0abddcd7-112f-450d-b7fb-c0d876bd3f51" 
containerName="extract-content" Dec 11 10:42:26 crc kubenswrapper[4746]: E1211 10:42:26.723068 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0abddcd7-112f-450d-b7fb-c0d876bd3f51" containerName="extract-utilities" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.723079 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0abddcd7-112f-450d-b7fb-c0d876bd3f51" containerName="extract-utilities" Dec 11 10:42:26 crc kubenswrapper[4746]: E1211 10:42:26.723102 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e" containerName="extract-content" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.723110 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e" containerName="extract-content" Dec 11 10:42:26 crc kubenswrapper[4746]: E1211 10:42:26.723133 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e" containerName="extract-utilities" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.723142 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e" containerName="extract-utilities" Dec 11 10:42:26 crc kubenswrapper[4746]: E1211 10:42:26.723161 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0abddcd7-112f-450d-b7fb-c0d876bd3f51" containerName="registry-server" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.723169 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0abddcd7-112f-450d-b7fb-c0d876bd3f51" containerName="registry-server" Dec 11 10:42:26 crc kubenswrapper[4746]: E1211 10:42:26.723186 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e" containerName="registry-server" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.723193 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e" 
containerName="registry-server" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.723419 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0abddcd7-112f-450d-b7fb-c0d876bd3f51" containerName="registry-server" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.723453 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f22ba0e-b8b0-4f58-8a7f-c6ecbea08e7e" containerName="registry-server" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.725172 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnszx" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.737824 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnszx"] Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.808795 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdpbn\" (UniqueName: \"kubernetes.io/projected/15170941-c85a-43a8-8fc1-d2c6d4117eab-kube-api-access-vdpbn\") pod \"redhat-marketplace-rnszx\" (UID: \"15170941-c85a-43a8-8fc1-d2c6d4117eab\") " pod="openshift-marketplace/redhat-marketplace-rnszx" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.808900 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15170941-c85a-43a8-8fc1-d2c6d4117eab-utilities\") pod \"redhat-marketplace-rnszx\" (UID: \"15170941-c85a-43a8-8fc1-d2c6d4117eab\") " pod="openshift-marketplace/redhat-marketplace-rnszx" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.808921 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15170941-c85a-43a8-8fc1-d2c6d4117eab-catalog-content\") pod \"redhat-marketplace-rnszx\" (UID: \"15170941-c85a-43a8-8fc1-d2c6d4117eab\") " 
pod="openshift-marketplace/redhat-marketplace-rnszx" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.911079 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdpbn\" (UniqueName: \"kubernetes.io/projected/15170941-c85a-43a8-8fc1-d2c6d4117eab-kube-api-access-vdpbn\") pod \"redhat-marketplace-rnszx\" (UID: \"15170941-c85a-43a8-8fc1-d2c6d4117eab\") " pod="openshift-marketplace/redhat-marketplace-rnszx" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.911277 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15170941-c85a-43a8-8fc1-d2c6d4117eab-catalog-content\") pod \"redhat-marketplace-rnszx\" (UID: \"15170941-c85a-43a8-8fc1-d2c6d4117eab\") " pod="openshift-marketplace/redhat-marketplace-rnszx" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.911307 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15170941-c85a-43a8-8fc1-d2c6d4117eab-utilities\") pod \"redhat-marketplace-rnszx\" (UID: \"15170941-c85a-43a8-8fc1-d2c6d4117eab\") " pod="openshift-marketplace/redhat-marketplace-rnszx" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.911902 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15170941-c85a-43a8-8fc1-d2c6d4117eab-catalog-content\") pod \"redhat-marketplace-rnszx\" (UID: \"15170941-c85a-43a8-8fc1-d2c6d4117eab\") " pod="openshift-marketplace/redhat-marketplace-rnszx" Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.912077 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15170941-c85a-43a8-8fc1-d2c6d4117eab-utilities\") pod \"redhat-marketplace-rnszx\" (UID: \"15170941-c85a-43a8-8fc1-d2c6d4117eab\") " pod="openshift-marketplace/redhat-marketplace-rnszx" 
Dec 11 10:42:26 crc kubenswrapper[4746]: I1211 10:42:26.954020 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdpbn\" (UniqueName: \"kubernetes.io/projected/15170941-c85a-43a8-8fc1-d2c6d4117eab-kube-api-access-vdpbn\") pod \"redhat-marketplace-rnszx\" (UID: \"15170941-c85a-43a8-8fc1-d2c6d4117eab\") " pod="openshift-marketplace/redhat-marketplace-rnszx" Dec 11 10:42:27 crc kubenswrapper[4746]: I1211 10:42:27.093228 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnszx" Dec 11 10:42:27 crc kubenswrapper[4746]: I1211 10:42:27.737514 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnszx"] Dec 11 10:42:28 crc kubenswrapper[4746]: I1211 10:42:28.106242 4746 generic.go:334] "Generic (PLEG): container finished" podID="15170941-c85a-43a8-8fc1-d2c6d4117eab" containerID="f822f801b91b10a8202beeb20cee27f043a0977468eed61243ae6bb5d2333e54" exitCode=0 Dec 11 10:42:28 crc kubenswrapper[4746]: I1211 10:42:28.106345 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnszx" event={"ID":"15170941-c85a-43a8-8fc1-d2c6d4117eab","Type":"ContainerDied","Data":"f822f801b91b10a8202beeb20cee27f043a0977468eed61243ae6bb5d2333e54"} Dec 11 10:42:28 crc kubenswrapper[4746]: I1211 10:42:28.106943 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnszx" event={"ID":"15170941-c85a-43a8-8fc1-d2c6d4117eab","Type":"ContainerStarted","Data":"4c8c23be67ece44ca65cd8ad921cb1b99449cdb1696f365b1ce98d478bb7ba8d"} Dec 11 10:42:28 crc kubenswrapper[4746]: I1211 10:42:28.108774 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 10:42:29 crc kubenswrapper[4746]: I1211 10:42:29.119150 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnszx" 
event={"ID":"15170941-c85a-43a8-8fc1-d2c6d4117eab","Type":"ContainerStarted","Data":"b39c120de0ce217271376175ab65ace4f271673b3a6cc54bca2fe7044e8961e0"} Dec 11 10:42:29 crc kubenswrapper[4746]: I1211 10:42:29.878205 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:42:29 crc kubenswrapper[4746]: I1211 10:42:29.878601 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:42:29 crc kubenswrapper[4746]: I1211 10:42:29.878667 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 10:42:29 crc kubenswrapper[4746]: I1211 10:42:29.879776 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ad3260619054a2be7ea78905c03fa7eb1a0df666f52e2085de9df171bfbe05b"} pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:42:29 crc kubenswrapper[4746]: I1211 10:42:29.879878 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" containerID="cri-o://3ad3260619054a2be7ea78905c03fa7eb1a0df666f52e2085de9df171bfbe05b" gracePeriod=600 Dec 11 10:42:30 crc kubenswrapper[4746]: I1211 10:42:30.152866 
4746 generic.go:334] "Generic (PLEG): container finished" podID="15170941-c85a-43a8-8fc1-d2c6d4117eab" containerID="b39c120de0ce217271376175ab65ace4f271673b3a6cc54bca2fe7044e8961e0" exitCode=0 Dec 11 10:42:30 crc kubenswrapper[4746]: I1211 10:42:30.152946 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnszx" event={"ID":"15170941-c85a-43a8-8fc1-d2c6d4117eab","Type":"ContainerDied","Data":"b39c120de0ce217271376175ab65ace4f271673b3a6cc54bca2fe7044e8961e0"} Dec 11 10:42:30 crc kubenswrapper[4746]: I1211 10:42:30.157447 4746 generic.go:334] "Generic (PLEG): container finished" podID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerID="3ad3260619054a2be7ea78905c03fa7eb1a0df666f52e2085de9df171bfbe05b" exitCode=0 Dec 11 10:42:30 crc kubenswrapper[4746]: I1211 10:42:30.157516 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerDied","Data":"3ad3260619054a2be7ea78905c03fa7eb1a0df666f52e2085de9df171bfbe05b"} Dec 11 10:42:30 crc kubenswrapper[4746]: I1211 10:42:30.157583 4746 scope.go:117] "RemoveContainer" containerID="6ce95011eb820609326bbf077c23f075b4eaf1eb00f5addb17dbda08aa22b806" Dec 11 10:42:31 crc kubenswrapper[4746]: I1211 10:42:31.169694 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61"} Dec 11 10:42:31 crc kubenswrapper[4746]: I1211 10:42:31.177292 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnszx" event={"ID":"15170941-c85a-43a8-8fc1-d2c6d4117eab","Type":"ContainerStarted","Data":"5f37104d8157b7af25fea5c8a6701e21822e9be6c33686034ee085d894c48436"} Dec 11 10:42:31 crc kubenswrapper[4746]: I1211 
10:42:31.233760 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rnszx" podStartSLOduration=2.688782026 podStartE2EDuration="5.233735706s" podCreationTimestamp="2025-12-11 10:42:26 +0000 UTC" firstStartedPulling="2025-12-11 10:42:28.108331832 +0000 UTC m=+2920.968195145" lastFinishedPulling="2025-12-11 10:42:30.653285512 +0000 UTC m=+2923.513148825" observedRunningTime="2025-12-11 10:42:31.218791842 +0000 UTC m=+2924.078655155" watchObservedRunningTime="2025-12-11 10:42:31.233735706 +0000 UTC m=+2924.093599029" Dec 11 10:42:37 crc kubenswrapper[4746]: I1211 10:42:37.094621 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rnszx" Dec 11 10:42:37 crc kubenswrapper[4746]: I1211 10:42:37.095275 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rnszx" Dec 11 10:42:37 crc kubenswrapper[4746]: I1211 10:42:37.148588 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rnszx" Dec 11 10:42:37 crc kubenswrapper[4746]: I1211 10:42:37.289988 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rnszx" Dec 11 10:42:37 crc kubenswrapper[4746]: I1211 10:42:37.389105 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnszx"] Dec 11 10:42:39 crc kubenswrapper[4746]: I1211 10:42:39.254969 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rnszx" podUID="15170941-c85a-43a8-8fc1-d2c6d4117eab" containerName="registry-server" containerID="cri-o://5f37104d8157b7af25fea5c8a6701e21822e9be6c33686034ee085d894c48436" gracePeriod=2 Dec 11 10:42:39 crc kubenswrapper[4746]: I1211 10:42:39.773305 4746 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnszx" Dec 11 10:42:39 crc kubenswrapper[4746]: I1211 10:42:39.916813 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15170941-c85a-43a8-8fc1-d2c6d4117eab-utilities\") pod \"15170941-c85a-43a8-8fc1-d2c6d4117eab\" (UID: \"15170941-c85a-43a8-8fc1-d2c6d4117eab\") " Dec 11 10:42:39 crc kubenswrapper[4746]: I1211 10:42:39.917368 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15170941-c85a-43a8-8fc1-d2c6d4117eab-catalog-content\") pod \"15170941-c85a-43a8-8fc1-d2c6d4117eab\" (UID: \"15170941-c85a-43a8-8fc1-d2c6d4117eab\") " Dec 11 10:42:39 crc kubenswrapper[4746]: I1211 10:42:39.917446 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdpbn\" (UniqueName: \"kubernetes.io/projected/15170941-c85a-43a8-8fc1-d2c6d4117eab-kube-api-access-vdpbn\") pod \"15170941-c85a-43a8-8fc1-d2c6d4117eab\" (UID: \"15170941-c85a-43a8-8fc1-d2c6d4117eab\") " Dec 11 10:42:39 crc kubenswrapper[4746]: I1211 10:42:39.918988 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15170941-c85a-43a8-8fc1-d2c6d4117eab-utilities" (OuterVolumeSpecName: "utilities") pod "15170941-c85a-43a8-8fc1-d2c6d4117eab" (UID: "15170941-c85a-43a8-8fc1-d2c6d4117eab"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:42:39 crc kubenswrapper[4746]: I1211 10:42:39.925689 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15170941-c85a-43a8-8fc1-d2c6d4117eab-kube-api-access-vdpbn" (OuterVolumeSpecName: "kube-api-access-vdpbn") pod "15170941-c85a-43a8-8fc1-d2c6d4117eab" (UID: "15170941-c85a-43a8-8fc1-d2c6d4117eab"). InnerVolumeSpecName "kube-api-access-vdpbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:42:39 crc kubenswrapper[4746]: I1211 10:42:39.942583 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15170941-c85a-43a8-8fc1-d2c6d4117eab-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15170941-c85a-43a8-8fc1-d2c6d4117eab" (UID: "15170941-c85a-43a8-8fc1-d2c6d4117eab"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.020081 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15170941-c85a-43a8-8fc1-d2c6d4117eab-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.020137 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15170941-c85a-43a8-8fc1-d2c6d4117eab-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.020154 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdpbn\" (UniqueName: \"kubernetes.io/projected/15170941-c85a-43a8-8fc1-d2c6d4117eab-kube-api-access-vdpbn\") on node \"crc\" DevicePath \"\"" Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.268289 4746 generic.go:334] "Generic (PLEG): container finished" podID="15170941-c85a-43a8-8fc1-d2c6d4117eab" containerID="5f37104d8157b7af25fea5c8a6701e21822e9be6c33686034ee085d894c48436" exitCode=0 Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.268341 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnszx" event={"ID":"15170941-c85a-43a8-8fc1-d2c6d4117eab","Type":"ContainerDied","Data":"5f37104d8157b7af25fea5c8a6701e21822e9be6c33686034ee085d894c48436"} Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.268372 4746 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-rnszx" event={"ID":"15170941-c85a-43a8-8fc1-d2c6d4117eab","Type":"ContainerDied","Data":"4c8c23be67ece44ca65cd8ad921cb1b99449cdb1696f365b1ce98d478bb7ba8d"} Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.268392 4746 scope.go:117] "RemoveContainer" containerID="5f37104d8157b7af25fea5c8a6701e21822e9be6c33686034ee085d894c48436" Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.270002 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnszx" Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.302610 4746 scope.go:117] "RemoveContainer" containerID="b39c120de0ce217271376175ab65ace4f271673b3a6cc54bca2fe7044e8961e0" Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.342127 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnszx"] Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.360232 4746 scope.go:117] "RemoveContainer" containerID="f822f801b91b10a8202beeb20cee27f043a0977468eed61243ae6bb5d2333e54" Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.372310 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnszx"] Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.424000 4746 scope.go:117] "RemoveContainer" containerID="5f37104d8157b7af25fea5c8a6701e21822e9be6c33686034ee085d894c48436" Dec 11 10:42:40 crc kubenswrapper[4746]: E1211 10:42:40.426690 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f37104d8157b7af25fea5c8a6701e21822e9be6c33686034ee085d894c48436\": container with ID starting with 5f37104d8157b7af25fea5c8a6701e21822e9be6c33686034ee085d894c48436 not found: ID does not exist" containerID="5f37104d8157b7af25fea5c8a6701e21822e9be6c33686034ee085d894c48436" Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.426736 4746 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f37104d8157b7af25fea5c8a6701e21822e9be6c33686034ee085d894c48436"} err="failed to get container status \"5f37104d8157b7af25fea5c8a6701e21822e9be6c33686034ee085d894c48436\": rpc error: code = NotFound desc = could not find container \"5f37104d8157b7af25fea5c8a6701e21822e9be6c33686034ee085d894c48436\": container with ID starting with 5f37104d8157b7af25fea5c8a6701e21822e9be6c33686034ee085d894c48436 not found: ID does not exist" Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.426768 4746 scope.go:117] "RemoveContainer" containerID="b39c120de0ce217271376175ab65ace4f271673b3a6cc54bca2fe7044e8961e0" Dec 11 10:42:40 crc kubenswrapper[4746]: E1211 10:42:40.428123 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b39c120de0ce217271376175ab65ace4f271673b3a6cc54bca2fe7044e8961e0\": container with ID starting with b39c120de0ce217271376175ab65ace4f271673b3a6cc54bca2fe7044e8961e0 not found: ID does not exist" containerID="b39c120de0ce217271376175ab65ace4f271673b3a6cc54bca2fe7044e8961e0" Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.428152 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b39c120de0ce217271376175ab65ace4f271673b3a6cc54bca2fe7044e8961e0"} err="failed to get container status \"b39c120de0ce217271376175ab65ace4f271673b3a6cc54bca2fe7044e8961e0\": rpc error: code = NotFound desc = could not find container \"b39c120de0ce217271376175ab65ace4f271673b3a6cc54bca2fe7044e8961e0\": container with ID starting with b39c120de0ce217271376175ab65ace4f271673b3a6cc54bca2fe7044e8961e0 not found: ID does not exist" Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.428169 4746 scope.go:117] "RemoveContainer" containerID="f822f801b91b10a8202beeb20cee27f043a0977468eed61243ae6bb5d2333e54" Dec 11 10:42:40 crc kubenswrapper[4746]: E1211 
10:42:40.428452 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f822f801b91b10a8202beeb20cee27f043a0977468eed61243ae6bb5d2333e54\": container with ID starting with f822f801b91b10a8202beeb20cee27f043a0977468eed61243ae6bb5d2333e54 not found: ID does not exist" containerID="f822f801b91b10a8202beeb20cee27f043a0977468eed61243ae6bb5d2333e54" Dec 11 10:42:40 crc kubenswrapper[4746]: I1211 10:42:40.428504 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f822f801b91b10a8202beeb20cee27f043a0977468eed61243ae6bb5d2333e54"} err="failed to get container status \"f822f801b91b10a8202beeb20cee27f043a0977468eed61243ae6bb5d2333e54\": rpc error: code = NotFound desc = could not find container \"f822f801b91b10a8202beeb20cee27f043a0977468eed61243ae6bb5d2333e54\": container with ID starting with f822f801b91b10a8202beeb20cee27f043a0977468eed61243ae6bb5d2333e54 not found: ID does not exist" Dec 11 10:42:41 crc kubenswrapper[4746]: I1211 10:42:41.646987 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15170941-c85a-43a8-8fc1-d2c6d4117eab" path="/var/lib/kubelet/pods/15170941-c85a-43a8-8fc1-d2c6d4117eab/volumes" Dec 11 10:44:59 crc kubenswrapper[4746]: I1211 10:44:59.877533 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:44:59 crc kubenswrapper[4746]: I1211 10:44:59.878256 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.270478 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4"] Dec 11 10:45:00 crc kubenswrapper[4746]: E1211 10:45:00.271042 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15170941-c85a-43a8-8fc1-d2c6d4117eab" containerName="extract-utilities" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.271073 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="15170941-c85a-43a8-8fc1-d2c6d4117eab" containerName="extract-utilities" Dec 11 10:45:00 crc kubenswrapper[4746]: E1211 10:45:00.271123 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15170941-c85a-43a8-8fc1-d2c6d4117eab" containerName="extract-content" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.271133 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="15170941-c85a-43a8-8fc1-d2c6d4117eab" containerName="extract-content" Dec 11 10:45:00 crc kubenswrapper[4746]: E1211 10:45:00.271140 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15170941-c85a-43a8-8fc1-d2c6d4117eab" containerName="registry-server" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.271147 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="15170941-c85a-43a8-8fc1-d2c6d4117eab" containerName="registry-server" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.271352 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="15170941-c85a-43a8-8fc1-d2c6d4117eab" containerName="registry-server" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.272178 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.275414 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.275566 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.282852 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4"] Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.439913 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8nhd\" (UniqueName: \"kubernetes.io/projected/76d3a76d-4797-4b10-96fe-37a8791e675f-kube-api-access-d8nhd\") pod \"collect-profiles-29424165-jjzv4\" (UID: \"76d3a76d-4797-4b10-96fe-37a8791e675f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.440373 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76d3a76d-4797-4b10-96fe-37a8791e675f-config-volume\") pod \"collect-profiles-29424165-jjzv4\" (UID: \"76d3a76d-4797-4b10-96fe-37a8791e675f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.440484 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76d3a76d-4797-4b10-96fe-37a8791e675f-secret-volume\") pod \"collect-profiles-29424165-jjzv4\" (UID: \"76d3a76d-4797-4b10-96fe-37a8791e675f\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.542369 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8nhd\" (UniqueName: \"kubernetes.io/projected/76d3a76d-4797-4b10-96fe-37a8791e675f-kube-api-access-d8nhd\") pod \"collect-profiles-29424165-jjzv4\" (UID: \"76d3a76d-4797-4b10-96fe-37a8791e675f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.542493 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76d3a76d-4797-4b10-96fe-37a8791e675f-config-volume\") pod \"collect-profiles-29424165-jjzv4\" (UID: \"76d3a76d-4797-4b10-96fe-37a8791e675f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.542528 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76d3a76d-4797-4b10-96fe-37a8791e675f-secret-volume\") pod \"collect-profiles-29424165-jjzv4\" (UID: \"76d3a76d-4797-4b10-96fe-37a8791e675f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.544462 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76d3a76d-4797-4b10-96fe-37a8791e675f-config-volume\") pod \"collect-profiles-29424165-jjzv4\" (UID: \"76d3a76d-4797-4b10-96fe-37a8791e675f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.550901 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/76d3a76d-4797-4b10-96fe-37a8791e675f-secret-volume\") pod \"collect-profiles-29424165-jjzv4\" (UID: \"76d3a76d-4797-4b10-96fe-37a8791e675f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.564191 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8nhd\" (UniqueName: \"kubernetes.io/projected/76d3a76d-4797-4b10-96fe-37a8791e675f-kube-api-access-d8nhd\") pod \"collect-profiles-29424165-jjzv4\" (UID: \"76d3a76d-4797-4b10-96fe-37a8791e675f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4" Dec 11 10:45:00 crc kubenswrapper[4746]: I1211 10:45:00.594692 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4" Dec 11 10:45:01 crc kubenswrapper[4746]: I1211 10:45:01.053696 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4"] Dec 11 10:45:01 crc kubenswrapper[4746]: I1211 10:45:01.975718 4746 generic.go:334] "Generic (PLEG): container finished" podID="76d3a76d-4797-4b10-96fe-37a8791e675f" containerID="7fdc28cb4d1d545ce0c34968afbc6e7ff701b63e71ce7e5be643e00873bb67f1" exitCode=0 Dec 11 10:45:01 crc kubenswrapper[4746]: I1211 10:45:01.976031 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4" event={"ID":"76d3a76d-4797-4b10-96fe-37a8791e675f","Type":"ContainerDied","Data":"7fdc28cb4d1d545ce0c34968afbc6e7ff701b63e71ce7e5be643e00873bb67f1"} Dec 11 10:45:01 crc kubenswrapper[4746]: I1211 10:45:01.976081 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4" 
event={"ID":"76d3a76d-4797-4b10-96fe-37a8791e675f","Type":"ContainerStarted","Data":"c4800b029316efb2532556bb03c3c85e04dbd6237df7c01f966f4e95be34488d"} Dec 11 10:45:03 crc kubenswrapper[4746]: I1211 10:45:03.394350 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4" Dec 11 10:45:03 crc kubenswrapper[4746]: I1211 10:45:03.506461 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76d3a76d-4797-4b10-96fe-37a8791e675f-secret-volume\") pod \"76d3a76d-4797-4b10-96fe-37a8791e675f\" (UID: \"76d3a76d-4797-4b10-96fe-37a8791e675f\") " Dec 11 10:45:03 crc kubenswrapper[4746]: I1211 10:45:03.506520 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8nhd\" (UniqueName: \"kubernetes.io/projected/76d3a76d-4797-4b10-96fe-37a8791e675f-kube-api-access-d8nhd\") pod \"76d3a76d-4797-4b10-96fe-37a8791e675f\" (UID: \"76d3a76d-4797-4b10-96fe-37a8791e675f\") " Dec 11 10:45:03 crc kubenswrapper[4746]: I1211 10:45:03.506564 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76d3a76d-4797-4b10-96fe-37a8791e675f-config-volume\") pod \"76d3a76d-4797-4b10-96fe-37a8791e675f\" (UID: \"76d3a76d-4797-4b10-96fe-37a8791e675f\") " Dec 11 10:45:03 crc kubenswrapper[4746]: I1211 10:45:03.507751 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76d3a76d-4797-4b10-96fe-37a8791e675f-config-volume" (OuterVolumeSpecName: "config-volume") pod "76d3a76d-4797-4b10-96fe-37a8791e675f" (UID: "76d3a76d-4797-4b10-96fe-37a8791e675f"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:45:03 crc kubenswrapper[4746]: I1211 10:45:03.512002 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/76d3a76d-4797-4b10-96fe-37a8791e675f-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 10:45:03 crc kubenswrapper[4746]: I1211 10:45:03.529653 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d3a76d-4797-4b10-96fe-37a8791e675f-kube-api-access-d8nhd" (OuterVolumeSpecName: "kube-api-access-d8nhd") pod "76d3a76d-4797-4b10-96fe-37a8791e675f" (UID: "76d3a76d-4797-4b10-96fe-37a8791e675f"). InnerVolumeSpecName "kube-api-access-d8nhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:45:03 crc kubenswrapper[4746]: I1211 10:45:03.530659 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d3a76d-4797-4b10-96fe-37a8791e675f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "76d3a76d-4797-4b10-96fe-37a8791e675f" (UID: "76d3a76d-4797-4b10-96fe-37a8791e675f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:45:03 crc kubenswrapper[4746]: I1211 10:45:03.614702 4746 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/76d3a76d-4797-4b10-96fe-37a8791e675f-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 10:45:03 crc kubenswrapper[4746]: I1211 10:45:03.614764 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8nhd\" (UniqueName: \"kubernetes.io/projected/76d3a76d-4797-4b10-96fe-37a8791e675f-kube-api-access-d8nhd\") on node \"crc\" DevicePath \"\"" Dec 11 10:45:03 crc kubenswrapper[4746]: I1211 10:45:03.997653 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4" event={"ID":"76d3a76d-4797-4b10-96fe-37a8791e675f","Type":"ContainerDied","Data":"c4800b029316efb2532556bb03c3c85e04dbd6237df7c01f966f4e95be34488d"} Dec 11 10:45:03 crc kubenswrapper[4746]: I1211 10:45:03.998065 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4800b029316efb2532556bb03c3c85e04dbd6237df7c01f966f4e95be34488d" Dec 11 10:45:03 crc kubenswrapper[4746]: I1211 10:45:03.998140 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424165-jjzv4" Dec 11 10:45:04 crc kubenswrapper[4746]: I1211 10:45:04.496606 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg"] Dec 11 10:45:04 crc kubenswrapper[4746]: I1211 10:45:04.505814 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424120-l55sg"] Dec 11 10:45:05 crc kubenswrapper[4746]: I1211 10:45:05.642948 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75cd9e37-60e9-42c2-9566-29af704fd01f" path="/var/lib/kubelet/pods/75cd9e37-60e9-42c2-9566-29af704fd01f/volumes" Dec 11 10:45:29 crc kubenswrapper[4746]: I1211 10:45:29.877346 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:45:29 crc kubenswrapper[4746]: I1211 10:45:29.877946 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:45:59 crc kubenswrapper[4746]: I1211 10:45:59.877470 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:45:59 crc kubenswrapper[4746]: I1211 10:45:59.878201 4746 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:45:59 crc kubenswrapper[4746]: I1211 10:45:59.878251 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 10:45:59 crc kubenswrapper[4746]: I1211 10:45:59.879183 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61"} pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:45:59 crc kubenswrapper[4746]: I1211 10:45:59.879239 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" containerID="cri-o://0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" gracePeriod=600 Dec 11 10:45:59 crc kubenswrapper[4746]: E1211 10:45:59.999908 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:46:00 crc kubenswrapper[4746]: I1211 10:46:00.527478 4746 generic.go:334] "Generic (PLEG): container finished" podID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" 
containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" exitCode=0 Dec 11 10:46:00 crc kubenswrapper[4746]: I1211 10:46:00.527527 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerDied","Data":"0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61"} Dec 11 10:46:00 crc kubenswrapper[4746]: I1211 10:46:00.527560 4746 scope.go:117] "RemoveContainer" containerID="3ad3260619054a2be7ea78905c03fa7eb1a0df666f52e2085de9df171bfbe05b" Dec 11 10:46:00 crc kubenswrapper[4746]: I1211 10:46:00.528234 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:46:00 crc kubenswrapper[4746]: E1211 10:46:00.528474 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:46:04 crc kubenswrapper[4746]: I1211 10:46:04.919165 4746 scope.go:117] "RemoveContainer" containerID="0ab6992344d6860830874a03cb33b6644204477d33e050d995bd93c3667c2e41" Dec 11 10:46:10 crc kubenswrapper[4746]: I1211 10:46:10.630485 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:46:10 crc kubenswrapper[4746]: E1211 10:46:10.631285 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:46:25 crc kubenswrapper[4746]: I1211 10:46:25.631566 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:46:25 crc kubenswrapper[4746]: E1211 10:46:25.632897 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:46:36 crc kubenswrapper[4746]: I1211 10:46:36.631451 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:46:36 crc kubenswrapper[4746]: E1211 10:46:36.632207 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:46:50 crc kubenswrapper[4746]: I1211 10:46:50.631695 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:46:50 crc kubenswrapper[4746]: E1211 10:46:50.632622 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:47:04 crc kubenswrapper[4746]: I1211 10:47:04.631236 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:47:04 crc kubenswrapper[4746]: E1211 10:47:04.632379 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:47:04 crc kubenswrapper[4746]: I1211 10:47:04.985994 4746 scope.go:117] "RemoveContainer" containerID="ed5ef4b98f9432f614641854ed09a91955a624cec2515b6c93e1378ee08af2a4" Dec 11 10:47:05 crc kubenswrapper[4746]: I1211 10:47:05.032936 4746 scope.go:117] "RemoveContainer" containerID="f4e0c13f9d728daf7570fcf237dacc6d40dd4e7b8eca0e48abf3f3d3a1a94a48" Dec 11 10:47:05 crc kubenswrapper[4746]: I1211 10:47:05.068494 4746 scope.go:117] "RemoveContainer" containerID="83368c09c000adba9e22f391575934bf6b2c1fbacff34c3ff2a70fcd9a546717" Dec 11 10:47:18 crc kubenswrapper[4746]: I1211 10:47:18.630650 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:47:18 crc kubenswrapper[4746]: E1211 10:47:18.631652 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:47:33 crc kubenswrapper[4746]: I1211 10:47:33.630913 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:47:33 crc kubenswrapper[4746]: E1211 10:47:33.631739 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:47:47 crc kubenswrapper[4746]: I1211 10:47:47.638733 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:47:47 crc kubenswrapper[4746]: E1211 10:47:47.639605 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:47:58 crc kubenswrapper[4746]: I1211 10:47:58.630932 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:47:58 crc kubenswrapper[4746]: E1211 10:47:58.632393 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:48:09 crc kubenswrapper[4746]: I1211 10:48:09.629744 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:48:09 crc kubenswrapper[4746]: E1211 10:48:09.630587 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:48:21 crc kubenswrapper[4746]: I1211 10:48:21.630714 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:48:21 crc kubenswrapper[4746]: E1211 10:48:21.631575 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:48:32 crc kubenswrapper[4746]: I1211 10:48:32.631552 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:48:32 crc kubenswrapper[4746]: E1211 10:48:32.632776 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:48:45 crc kubenswrapper[4746]: I1211 10:48:45.633628 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:48:45 crc kubenswrapper[4746]: E1211 10:48:45.634694 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:48:56 crc kubenswrapper[4746]: I1211 10:48:56.705521 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:48:56 crc kubenswrapper[4746]: E1211 10:48:56.706585 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:49:07 crc kubenswrapper[4746]: I1211 10:49:07.637978 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:49:07 crc kubenswrapper[4746]: E1211 10:49:07.639031 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:49:19 crc kubenswrapper[4746]: I1211 10:49:19.630402 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:49:19 crc kubenswrapper[4746]: E1211 10:49:19.631367 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:49:33 crc kubenswrapper[4746]: I1211 10:49:33.631136 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:49:33 crc kubenswrapper[4746]: E1211 10:49:33.632070 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:49:48 crc kubenswrapper[4746]: I1211 10:49:48.630536 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:49:48 crc kubenswrapper[4746]: E1211 10:49:48.631508 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:50:00 crc kubenswrapper[4746]: I1211 10:50:00.630658 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:50:00 crc kubenswrapper[4746]: E1211 10:50:00.631516 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:50:06 crc kubenswrapper[4746]: I1211 10:50:06.872275 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-5c8g4" podUID="5d8442a7-c511-4f69-b04e-45e750f27bfa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 10:50:07 crc kubenswrapper[4746]: I1211 10:50:06.878783 4746 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-kklt5" podUID="798a9e32-0bc8-4231-834a-fc2b002c87aa" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 11 10:50:14 crc kubenswrapper[4746]: I1211 10:50:14.630647 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:50:14 crc 
kubenswrapper[4746]: E1211 10:50:14.631704 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:50:29 crc kubenswrapper[4746]: I1211 10:50:29.634626 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:50:29 crc kubenswrapper[4746]: E1211 10:50:29.635831 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:50:40 crc kubenswrapper[4746]: I1211 10:50:40.631225 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:50:40 crc kubenswrapper[4746]: E1211 10:50:40.632163 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:50:55 crc kubenswrapper[4746]: I1211 10:50:55.631551 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 
11 10:50:55 crc kubenswrapper[4746]: E1211 10:50:55.632551 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:51:06 crc kubenswrapper[4746]: I1211 10:51:06.631264 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:51:07 crc kubenswrapper[4746]: I1211 10:51:07.648027 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"103e69231676c121cf31be18b3bc2feb4dfdceb796773042bf1afbc251fcee31"} Dec 11 10:51:15 crc kubenswrapper[4746]: I1211 10:51:15.229034 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g62m6"] Dec 11 10:51:15 crc kubenswrapper[4746]: E1211 10:51:15.230150 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d3a76d-4797-4b10-96fe-37a8791e675f" containerName="collect-profiles" Dec 11 10:51:15 crc kubenswrapper[4746]: I1211 10:51:15.230165 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d3a76d-4797-4b10-96fe-37a8791e675f" containerName="collect-profiles" Dec 11 10:51:15 crc kubenswrapper[4746]: I1211 10:51:15.230543 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d3a76d-4797-4b10-96fe-37a8791e675f" containerName="collect-profiles" Dec 11 10:51:15 crc kubenswrapper[4746]: I1211 10:51:15.232183 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:15 crc kubenswrapper[4746]: I1211 10:51:15.255307 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g62m6"] Dec 11 10:51:15 crc kubenswrapper[4746]: I1211 10:51:15.305897 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-catalog-content\") pod \"redhat-operators-g62m6\" (UID: \"5c9e21bc-fe9f-475f-ad1d-9b32b37477af\") " pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:15 crc kubenswrapper[4746]: I1211 10:51:15.306104 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgzj8\" (UniqueName: \"kubernetes.io/projected/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-kube-api-access-tgzj8\") pod \"redhat-operators-g62m6\" (UID: \"5c9e21bc-fe9f-475f-ad1d-9b32b37477af\") " pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:15 crc kubenswrapper[4746]: I1211 10:51:15.306804 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-utilities\") pod \"redhat-operators-g62m6\" (UID: \"5c9e21bc-fe9f-475f-ad1d-9b32b37477af\") " pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:15 crc kubenswrapper[4746]: I1211 10:51:15.409468 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-utilities\") pod \"redhat-operators-g62m6\" (UID: \"5c9e21bc-fe9f-475f-ad1d-9b32b37477af\") " pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:15 crc kubenswrapper[4746]: I1211 10:51:15.409665 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-catalog-content\") pod \"redhat-operators-g62m6\" (UID: \"5c9e21bc-fe9f-475f-ad1d-9b32b37477af\") " pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:15 crc kubenswrapper[4746]: I1211 10:51:15.409716 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgzj8\" (UniqueName: \"kubernetes.io/projected/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-kube-api-access-tgzj8\") pod \"redhat-operators-g62m6\" (UID: \"5c9e21bc-fe9f-475f-ad1d-9b32b37477af\") " pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:15 crc kubenswrapper[4746]: I1211 10:51:15.410423 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-utilities\") pod \"redhat-operators-g62m6\" (UID: \"5c9e21bc-fe9f-475f-ad1d-9b32b37477af\") " pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:15 crc kubenswrapper[4746]: I1211 10:51:15.410555 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-catalog-content\") pod \"redhat-operators-g62m6\" (UID: \"5c9e21bc-fe9f-475f-ad1d-9b32b37477af\") " pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:15 crc kubenswrapper[4746]: I1211 10:51:15.440639 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgzj8\" (UniqueName: \"kubernetes.io/projected/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-kube-api-access-tgzj8\") pod \"redhat-operators-g62m6\" (UID: \"5c9e21bc-fe9f-475f-ad1d-9b32b37477af\") " pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:15 crc kubenswrapper[4746]: I1211 10:51:15.555175 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:16 crc kubenswrapper[4746]: I1211 10:51:16.118938 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g62m6"] Dec 11 10:51:16 crc kubenswrapper[4746]: I1211 10:51:16.742595 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g62m6" event={"ID":"5c9e21bc-fe9f-475f-ad1d-9b32b37477af","Type":"ContainerStarted","Data":"01b704de56f8a35c1059c2a46b0e986cab741c12c1ebe5867f68332549493a0d"} Dec 11 10:51:17 crc kubenswrapper[4746]: I1211 10:51:17.754511 4746 generic.go:334] "Generic (PLEG): container finished" podID="5c9e21bc-fe9f-475f-ad1d-9b32b37477af" containerID="b3844be28f105afab63f0d9323f6faa98a70011f816f7fd96f00525243a0e761" exitCode=0 Dec 11 10:51:17 crc kubenswrapper[4746]: I1211 10:51:17.754624 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g62m6" event={"ID":"5c9e21bc-fe9f-475f-ad1d-9b32b37477af","Type":"ContainerDied","Data":"b3844be28f105afab63f0d9323f6faa98a70011f816f7fd96f00525243a0e761"} Dec 11 10:51:17 crc kubenswrapper[4746]: I1211 10:51:17.762413 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 10:51:20 crc kubenswrapper[4746]: I1211 10:51:20.789289 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g62m6" event={"ID":"5c9e21bc-fe9f-475f-ad1d-9b32b37477af","Type":"ContainerStarted","Data":"61d53f58faf5a2fab36e2e18d3f503f70bc74c7a4a2ec9c6d154d2a93f72c19c"} Dec 11 10:51:23 crc kubenswrapper[4746]: I1211 10:51:23.828486 4746 generic.go:334] "Generic (PLEG): container finished" podID="5c9e21bc-fe9f-475f-ad1d-9b32b37477af" containerID="61d53f58faf5a2fab36e2e18d3f503f70bc74c7a4a2ec9c6d154d2a93f72c19c" exitCode=0 Dec 11 10:51:23 crc kubenswrapper[4746]: I1211 10:51:23.828576 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-g62m6" event={"ID":"5c9e21bc-fe9f-475f-ad1d-9b32b37477af","Type":"ContainerDied","Data":"61d53f58faf5a2fab36e2e18d3f503f70bc74c7a4a2ec9c6d154d2a93f72c19c"} Dec 11 10:51:24 crc kubenswrapper[4746]: I1211 10:51:24.842016 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g62m6" event={"ID":"5c9e21bc-fe9f-475f-ad1d-9b32b37477af","Type":"ContainerStarted","Data":"0f00159069a56fb72027efe479a7191cfae1dbb654fbff0c2833e88ce4d31087"} Dec 11 10:51:24 crc kubenswrapper[4746]: I1211 10:51:24.866821 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g62m6" podStartSLOduration=3.239170352 podStartE2EDuration="9.866798019s" podCreationTimestamp="2025-12-11 10:51:15 +0000 UTC" firstStartedPulling="2025-12-11 10:51:17.762120277 +0000 UTC m=+3450.621983590" lastFinishedPulling="2025-12-11 10:51:24.389747944 +0000 UTC m=+3457.249611257" observedRunningTime="2025-12-11 10:51:24.861038584 +0000 UTC m=+3457.720901907" watchObservedRunningTime="2025-12-11 10:51:24.866798019 +0000 UTC m=+3457.726661332" Dec 11 10:51:25 crc kubenswrapper[4746]: I1211 10:51:25.555625 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:25 crc kubenswrapper[4746]: I1211 10:51:25.556259 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:26 crc kubenswrapper[4746]: I1211 10:51:26.605282 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g62m6" podUID="5c9e21bc-fe9f-475f-ad1d-9b32b37477af" containerName="registry-server" probeResult="failure" output=< Dec 11 10:51:26 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s Dec 11 10:51:26 crc kubenswrapper[4746]: > Dec 11 10:51:35 crc kubenswrapper[4746]: I1211 
10:51:35.615014 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:35 crc kubenswrapper[4746]: I1211 10:51:35.668758 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:35 crc kubenswrapper[4746]: I1211 10:51:35.859490 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g62m6"] Dec 11 10:51:36 crc kubenswrapper[4746]: I1211 10:51:36.956501 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g62m6" podUID="5c9e21bc-fe9f-475f-ad1d-9b32b37477af" containerName="registry-server" containerID="cri-o://0f00159069a56fb72027efe479a7191cfae1dbb654fbff0c2833e88ce4d31087" gracePeriod=2 Dec 11 10:51:37 crc kubenswrapper[4746]: I1211 10:51:37.505859 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:37 crc kubenswrapper[4746]: I1211 10:51:37.608577 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-utilities\") pod \"5c9e21bc-fe9f-475f-ad1d-9b32b37477af\" (UID: \"5c9e21bc-fe9f-475f-ad1d-9b32b37477af\") " Dec 11 10:51:37 crc kubenswrapper[4746]: I1211 10:51:37.608666 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-catalog-content\") pod \"5c9e21bc-fe9f-475f-ad1d-9b32b37477af\" (UID: \"5c9e21bc-fe9f-475f-ad1d-9b32b37477af\") " Dec 11 10:51:37 crc kubenswrapper[4746]: I1211 10:51:37.608883 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgzj8\" (UniqueName: 
\"kubernetes.io/projected/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-kube-api-access-tgzj8\") pod \"5c9e21bc-fe9f-475f-ad1d-9b32b37477af\" (UID: \"5c9e21bc-fe9f-475f-ad1d-9b32b37477af\") " Dec 11 10:51:37 crc kubenswrapper[4746]: I1211 10:51:37.609487 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-utilities" (OuterVolumeSpecName: "utilities") pod "5c9e21bc-fe9f-475f-ad1d-9b32b37477af" (UID: "5c9e21bc-fe9f-475f-ad1d-9b32b37477af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:51:37 crc kubenswrapper[4746]: I1211 10:51:37.618626 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-kube-api-access-tgzj8" (OuterVolumeSpecName: "kube-api-access-tgzj8") pod "5c9e21bc-fe9f-475f-ad1d-9b32b37477af" (UID: "5c9e21bc-fe9f-475f-ad1d-9b32b37477af"). InnerVolumeSpecName "kube-api-access-tgzj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:51:37 crc kubenswrapper[4746]: I1211 10:51:37.714253 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgzj8\" (UniqueName: \"kubernetes.io/projected/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-kube-api-access-tgzj8\") on node \"crc\" DevicePath \"\"" Dec 11 10:51:37 crc kubenswrapper[4746]: I1211 10:51:37.714621 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:51:37 crc kubenswrapper[4746]: I1211 10:51:37.754501 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c9e21bc-fe9f-475f-ad1d-9b32b37477af" (UID: "5c9e21bc-fe9f-475f-ad1d-9b32b37477af"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:51:37 crc kubenswrapper[4746]: I1211 10:51:37.816393 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c9e21bc-fe9f-475f-ad1d-9b32b37477af-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:51:37 crc kubenswrapper[4746]: I1211 10:51:37.967628 4746 generic.go:334] "Generic (PLEG): container finished" podID="5c9e21bc-fe9f-475f-ad1d-9b32b37477af" containerID="0f00159069a56fb72027efe479a7191cfae1dbb654fbff0c2833e88ce4d31087" exitCode=0 Dec 11 10:51:37 crc kubenswrapper[4746]: I1211 10:51:37.967679 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g62m6" event={"ID":"5c9e21bc-fe9f-475f-ad1d-9b32b37477af","Type":"ContainerDied","Data":"0f00159069a56fb72027efe479a7191cfae1dbb654fbff0c2833e88ce4d31087"} Dec 11 10:51:37 crc kubenswrapper[4746]: I1211 10:51:37.967708 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g62m6" event={"ID":"5c9e21bc-fe9f-475f-ad1d-9b32b37477af","Type":"ContainerDied","Data":"01b704de56f8a35c1059c2a46b0e986cab741c12c1ebe5867f68332549493a0d"} Dec 11 10:51:37 crc kubenswrapper[4746]: I1211 10:51:37.967727 4746 scope.go:117] "RemoveContainer" containerID="0f00159069a56fb72027efe479a7191cfae1dbb654fbff0c2833e88ce4d31087" Dec 11 10:51:37 crc kubenswrapper[4746]: I1211 10:51:37.967915 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g62m6" Dec 11 10:51:38 crc kubenswrapper[4746]: I1211 10:51:38.009365 4746 scope.go:117] "RemoveContainer" containerID="61d53f58faf5a2fab36e2e18d3f503f70bc74c7a4a2ec9c6d154d2a93f72c19c" Dec 11 10:51:38 crc kubenswrapper[4746]: I1211 10:51:38.015566 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g62m6"] Dec 11 10:51:38 crc kubenswrapper[4746]: I1211 10:51:38.029125 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g62m6"] Dec 11 10:51:38 crc kubenswrapper[4746]: I1211 10:51:38.044634 4746 scope.go:117] "RemoveContainer" containerID="b3844be28f105afab63f0d9323f6faa98a70011f816f7fd96f00525243a0e761" Dec 11 10:51:38 crc kubenswrapper[4746]: I1211 10:51:38.084554 4746 scope.go:117] "RemoveContainer" containerID="0f00159069a56fb72027efe479a7191cfae1dbb654fbff0c2833e88ce4d31087" Dec 11 10:51:38 crc kubenswrapper[4746]: E1211 10:51:38.085182 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f00159069a56fb72027efe479a7191cfae1dbb654fbff0c2833e88ce4d31087\": container with ID starting with 0f00159069a56fb72027efe479a7191cfae1dbb654fbff0c2833e88ce4d31087 not found: ID does not exist" containerID="0f00159069a56fb72027efe479a7191cfae1dbb654fbff0c2833e88ce4d31087" Dec 11 10:51:38 crc kubenswrapper[4746]: I1211 10:51:38.085229 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f00159069a56fb72027efe479a7191cfae1dbb654fbff0c2833e88ce4d31087"} err="failed to get container status \"0f00159069a56fb72027efe479a7191cfae1dbb654fbff0c2833e88ce4d31087\": rpc error: code = NotFound desc = could not find container \"0f00159069a56fb72027efe479a7191cfae1dbb654fbff0c2833e88ce4d31087\": container with ID starting with 0f00159069a56fb72027efe479a7191cfae1dbb654fbff0c2833e88ce4d31087 not found: ID does 
not exist" Dec 11 10:51:38 crc kubenswrapper[4746]: I1211 10:51:38.085267 4746 scope.go:117] "RemoveContainer" containerID="61d53f58faf5a2fab36e2e18d3f503f70bc74c7a4a2ec9c6d154d2a93f72c19c" Dec 11 10:51:38 crc kubenswrapper[4746]: E1211 10:51:38.085594 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61d53f58faf5a2fab36e2e18d3f503f70bc74c7a4a2ec9c6d154d2a93f72c19c\": container with ID starting with 61d53f58faf5a2fab36e2e18d3f503f70bc74c7a4a2ec9c6d154d2a93f72c19c not found: ID does not exist" containerID="61d53f58faf5a2fab36e2e18d3f503f70bc74c7a4a2ec9c6d154d2a93f72c19c" Dec 11 10:51:38 crc kubenswrapper[4746]: I1211 10:51:38.085629 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61d53f58faf5a2fab36e2e18d3f503f70bc74c7a4a2ec9c6d154d2a93f72c19c"} err="failed to get container status \"61d53f58faf5a2fab36e2e18d3f503f70bc74c7a4a2ec9c6d154d2a93f72c19c\": rpc error: code = NotFound desc = could not find container \"61d53f58faf5a2fab36e2e18d3f503f70bc74c7a4a2ec9c6d154d2a93f72c19c\": container with ID starting with 61d53f58faf5a2fab36e2e18d3f503f70bc74c7a4a2ec9c6d154d2a93f72c19c not found: ID does not exist" Dec 11 10:51:38 crc kubenswrapper[4746]: I1211 10:51:38.085657 4746 scope.go:117] "RemoveContainer" containerID="b3844be28f105afab63f0d9323f6faa98a70011f816f7fd96f00525243a0e761" Dec 11 10:51:38 crc kubenswrapper[4746]: E1211 10:51:38.086037 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3844be28f105afab63f0d9323f6faa98a70011f816f7fd96f00525243a0e761\": container with ID starting with b3844be28f105afab63f0d9323f6faa98a70011f816f7fd96f00525243a0e761 not found: ID does not exist" containerID="b3844be28f105afab63f0d9323f6faa98a70011f816f7fd96f00525243a0e761" Dec 11 10:51:38 crc kubenswrapper[4746]: I1211 10:51:38.086108 4746 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3844be28f105afab63f0d9323f6faa98a70011f816f7fd96f00525243a0e761"} err="failed to get container status \"b3844be28f105afab63f0d9323f6faa98a70011f816f7fd96f00525243a0e761\": rpc error: code = NotFound desc = could not find container \"b3844be28f105afab63f0d9323f6faa98a70011f816f7fd96f00525243a0e761\": container with ID starting with b3844be28f105afab63f0d9323f6faa98a70011f816f7fd96f00525243a0e761 not found: ID does not exist" Dec 11 10:51:39 crc kubenswrapper[4746]: I1211 10:51:39.644083 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c9e21bc-fe9f-475f-ad1d-9b32b37477af" path="/var/lib/kubelet/pods/5c9e21bc-fe9f-475f-ad1d-9b32b37477af/volumes" Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 10:52:06.357129 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cjdw4"] Dec 11 10:52:06 crc kubenswrapper[4746]: E1211 10:52:06.358901 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9e21bc-fe9f-475f-ad1d-9b32b37477af" containerName="extract-utilities" Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 10:52:06.358924 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9e21bc-fe9f-475f-ad1d-9b32b37477af" containerName="extract-utilities" Dec 11 10:52:06 crc kubenswrapper[4746]: E1211 10:52:06.358939 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9e21bc-fe9f-475f-ad1d-9b32b37477af" containerName="registry-server" Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 10:52:06.358947 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9e21bc-fe9f-475f-ad1d-9b32b37477af" containerName="registry-server" Dec 11 10:52:06 crc kubenswrapper[4746]: E1211 10:52:06.358976 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9e21bc-fe9f-475f-ad1d-9b32b37477af" containerName="extract-content" Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 10:52:06.358986 4746 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9e21bc-fe9f-475f-ad1d-9b32b37477af" containerName="extract-content" Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 10:52:06.359272 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9e21bc-fe9f-475f-ad1d-9b32b37477af" containerName="registry-server" Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 10:52:06.361032 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 10:52:06.374590 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cjdw4"] Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 10:52:06.452770 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nrd9\" (UniqueName: \"kubernetes.io/projected/4fdaec04-df55-4788-9409-f1f86bc04351-kube-api-access-9nrd9\") pod \"community-operators-cjdw4\" (UID: \"4fdaec04-df55-4788-9409-f1f86bc04351\") " pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 10:52:06.452837 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fdaec04-df55-4788-9409-f1f86bc04351-catalog-content\") pod \"community-operators-cjdw4\" (UID: \"4fdaec04-df55-4788-9409-f1f86bc04351\") " pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 10:52:06.453356 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fdaec04-df55-4788-9409-f1f86bc04351-utilities\") pod \"community-operators-cjdw4\" (UID: \"4fdaec04-df55-4788-9409-f1f86bc04351\") " pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 
10:52:06.555188 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nrd9\" (UniqueName: \"kubernetes.io/projected/4fdaec04-df55-4788-9409-f1f86bc04351-kube-api-access-9nrd9\") pod \"community-operators-cjdw4\" (UID: \"4fdaec04-df55-4788-9409-f1f86bc04351\") " pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 10:52:06.555250 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fdaec04-df55-4788-9409-f1f86bc04351-catalog-content\") pod \"community-operators-cjdw4\" (UID: \"4fdaec04-df55-4788-9409-f1f86bc04351\") " pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 10:52:06.555376 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fdaec04-df55-4788-9409-f1f86bc04351-utilities\") pod \"community-operators-cjdw4\" (UID: \"4fdaec04-df55-4788-9409-f1f86bc04351\") " pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 10:52:06.556027 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fdaec04-df55-4788-9409-f1f86bc04351-catalog-content\") pod \"community-operators-cjdw4\" (UID: \"4fdaec04-df55-4788-9409-f1f86bc04351\") " pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 10:52:06.556093 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fdaec04-df55-4788-9409-f1f86bc04351-utilities\") pod \"community-operators-cjdw4\" (UID: \"4fdaec04-df55-4788-9409-f1f86bc04351\") " pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 10:52:06.595734 4746 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nrd9\" (UniqueName: \"kubernetes.io/projected/4fdaec04-df55-4788-9409-f1f86bc04351-kube-api-access-9nrd9\") pod \"community-operators-cjdw4\" (UID: \"4fdaec04-df55-4788-9409-f1f86bc04351\") " pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:06 crc kubenswrapper[4746]: I1211 10:52:06.698186 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:07 crc kubenswrapper[4746]: I1211 10:52:07.287188 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cjdw4"] Dec 11 10:52:08 crc kubenswrapper[4746]: I1211 10:52:08.296382 4746 generic.go:334] "Generic (PLEG): container finished" podID="4fdaec04-df55-4788-9409-f1f86bc04351" containerID="c13a3c492588f5b7315133ab0dbf4142834e6c1448af485ed864540696429d93" exitCode=0 Dec 11 10:52:08 crc kubenswrapper[4746]: I1211 10:52:08.296728 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjdw4" event={"ID":"4fdaec04-df55-4788-9409-f1f86bc04351","Type":"ContainerDied","Data":"c13a3c492588f5b7315133ab0dbf4142834e6c1448af485ed864540696429d93"} Dec 11 10:52:08 crc kubenswrapper[4746]: I1211 10:52:08.296764 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjdw4" event={"ID":"4fdaec04-df55-4788-9409-f1f86bc04351","Type":"ContainerStarted","Data":"0166f99340175ed971b3c35b01553ecd700f15f38c4f30232f1210f3f9a5d259"} Dec 11 10:52:10 crc kubenswrapper[4746]: I1211 10:52:10.317167 4746 generic.go:334] "Generic (PLEG): container finished" podID="4fdaec04-df55-4788-9409-f1f86bc04351" containerID="24db252246a7ff86eeb7b62b2e19432cc6ad57576b3ad627f30178430e2750de" exitCode=0 Dec 11 10:52:10 crc kubenswrapper[4746]: I1211 10:52:10.317504 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-cjdw4" event={"ID":"4fdaec04-df55-4788-9409-f1f86bc04351","Type":"ContainerDied","Data":"24db252246a7ff86eeb7b62b2e19432cc6ad57576b3ad627f30178430e2750de"} Dec 11 10:52:11 crc kubenswrapper[4746]: I1211 10:52:11.330612 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjdw4" event={"ID":"4fdaec04-df55-4788-9409-f1f86bc04351","Type":"ContainerStarted","Data":"53238c9386bb34f54d416a70a6c1d9d1bff94c3862743db7aa9c9602a13186a4"} Dec 11 10:52:11 crc kubenswrapper[4746]: I1211 10:52:11.351835 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cjdw4" podStartSLOduration=2.852478631 podStartE2EDuration="5.35180727s" podCreationTimestamp="2025-12-11 10:52:06 +0000 UTC" firstStartedPulling="2025-12-11 10:52:08.29876373 +0000 UTC m=+3501.158627043" lastFinishedPulling="2025-12-11 10:52:10.798092369 +0000 UTC m=+3503.657955682" observedRunningTime="2025-12-11 10:52:11.350970928 +0000 UTC m=+3504.210834241" watchObservedRunningTime="2025-12-11 10:52:11.35180727 +0000 UTC m=+3504.211670583" Dec 11 10:52:16 crc kubenswrapper[4746]: I1211 10:52:16.698976 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:16 crc kubenswrapper[4746]: I1211 10:52:16.699565 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:16 crc kubenswrapper[4746]: I1211 10:52:16.752367 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:17 crc kubenswrapper[4746]: I1211 10:52:17.450091 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:17 crc kubenswrapper[4746]: I1211 10:52:17.508082 4746 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cjdw4"] Dec 11 10:52:19 crc kubenswrapper[4746]: I1211 10:52:19.407159 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cjdw4" podUID="4fdaec04-df55-4788-9409-f1f86bc04351" containerName="registry-server" containerID="cri-o://53238c9386bb34f54d416a70a6c1d9d1bff94c3862743db7aa9c9602a13186a4" gracePeriod=2 Dec 11 10:52:19 crc kubenswrapper[4746]: I1211 10:52:19.916019 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:19 crc kubenswrapper[4746]: I1211 10:52:19.992185 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fdaec04-df55-4788-9409-f1f86bc04351-utilities\") pod \"4fdaec04-df55-4788-9409-f1f86bc04351\" (UID: \"4fdaec04-df55-4788-9409-f1f86bc04351\") " Dec 11 10:52:19 crc kubenswrapper[4746]: I1211 10:52:19.992244 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fdaec04-df55-4788-9409-f1f86bc04351-catalog-content\") pod \"4fdaec04-df55-4788-9409-f1f86bc04351\" (UID: \"4fdaec04-df55-4788-9409-f1f86bc04351\") " Dec 11 10:52:19 crc kubenswrapper[4746]: I1211 10:52:19.992325 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nrd9\" (UniqueName: \"kubernetes.io/projected/4fdaec04-df55-4788-9409-f1f86bc04351-kube-api-access-9nrd9\") pod \"4fdaec04-df55-4788-9409-f1f86bc04351\" (UID: \"4fdaec04-df55-4788-9409-f1f86bc04351\") " Dec 11 10:52:19 crc kubenswrapper[4746]: I1211 10:52:19.993423 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fdaec04-df55-4788-9409-f1f86bc04351-utilities" (OuterVolumeSpecName: "utilities") pod 
"4fdaec04-df55-4788-9409-f1f86bc04351" (UID: "4fdaec04-df55-4788-9409-f1f86bc04351"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:52:19 crc kubenswrapper[4746]: I1211 10:52:19.998912 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fdaec04-df55-4788-9409-f1f86bc04351-kube-api-access-9nrd9" (OuterVolumeSpecName: "kube-api-access-9nrd9") pod "4fdaec04-df55-4788-9409-f1f86bc04351" (UID: "4fdaec04-df55-4788-9409-f1f86bc04351"). InnerVolumeSpecName "kube-api-access-9nrd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.094797 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fdaec04-df55-4788-9409-f1f86bc04351-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.094829 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nrd9\" (UniqueName: \"kubernetes.io/projected/4fdaec04-df55-4788-9409-f1f86bc04351-kube-api-access-9nrd9\") on node \"crc\" DevicePath \"\"" Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.351134 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fdaec04-df55-4788-9409-f1f86bc04351-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fdaec04-df55-4788-9409-f1f86bc04351" (UID: "4fdaec04-df55-4788-9409-f1f86bc04351"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.400420 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fdaec04-df55-4788-9409-f1f86bc04351-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.417629 4746 generic.go:334] "Generic (PLEG): container finished" podID="4fdaec04-df55-4788-9409-f1f86bc04351" containerID="53238c9386bb34f54d416a70a6c1d9d1bff94c3862743db7aa9c9602a13186a4" exitCode=0 Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.417690 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjdw4" event={"ID":"4fdaec04-df55-4788-9409-f1f86bc04351","Type":"ContainerDied","Data":"53238c9386bb34f54d416a70a6c1d9d1bff94c3862743db7aa9c9602a13186a4"} Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.417713 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cjdw4" Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.417737 4746 scope.go:117] "RemoveContainer" containerID="53238c9386bb34f54d416a70a6c1d9d1bff94c3862743db7aa9c9602a13186a4" Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.417726 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cjdw4" event={"ID":"4fdaec04-df55-4788-9409-f1f86bc04351","Type":"ContainerDied","Data":"0166f99340175ed971b3c35b01553ecd700f15f38c4f30232f1210f3f9a5d259"} Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.439428 4746 scope.go:117] "RemoveContainer" containerID="24db252246a7ff86eeb7b62b2e19432cc6ad57576b3ad627f30178430e2750de" Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.458116 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cjdw4"] Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.464260 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cjdw4"] Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.475677 4746 scope.go:117] "RemoveContainer" containerID="c13a3c492588f5b7315133ab0dbf4142834e6c1448af485ed864540696429d93" Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.523408 4746 scope.go:117] "RemoveContainer" containerID="53238c9386bb34f54d416a70a6c1d9d1bff94c3862743db7aa9c9602a13186a4" Dec 11 10:52:20 crc kubenswrapper[4746]: E1211 10:52:20.526114 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53238c9386bb34f54d416a70a6c1d9d1bff94c3862743db7aa9c9602a13186a4\": container with ID starting with 53238c9386bb34f54d416a70a6c1d9d1bff94c3862743db7aa9c9602a13186a4 not found: ID does not exist" containerID="53238c9386bb34f54d416a70a6c1d9d1bff94c3862743db7aa9c9602a13186a4" Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.526182 4746 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53238c9386bb34f54d416a70a6c1d9d1bff94c3862743db7aa9c9602a13186a4"} err="failed to get container status \"53238c9386bb34f54d416a70a6c1d9d1bff94c3862743db7aa9c9602a13186a4\": rpc error: code = NotFound desc = could not find container \"53238c9386bb34f54d416a70a6c1d9d1bff94c3862743db7aa9c9602a13186a4\": container with ID starting with 53238c9386bb34f54d416a70a6c1d9d1bff94c3862743db7aa9c9602a13186a4 not found: ID does not exist" Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.526221 4746 scope.go:117] "RemoveContainer" containerID="24db252246a7ff86eeb7b62b2e19432cc6ad57576b3ad627f30178430e2750de" Dec 11 10:52:20 crc kubenswrapper[4746]: E1211 10:52:20.526607 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24db252246a7ff86eeb7b62b2e19432cc6ad57576b3ad627f30178430e2750de\": container with ID starting with 24db252246a7ff86eeb7b62b2e19432cc6ad57576b3ad627f30178430e2750de not found: ID does not exist" containerID="24db252246a7ff86eeb7b62b2e19432cc6ad57576b3ad627f30178430e2750de" Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.526649 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24db252246a7ff86eeb7b62b2e19432cc6ad57576b3ad627f30178430e2750de"} err="failed to get container status \"24db252246a7ff86eeb7b62b2e19432cc6ad57576b3ad627f30178430e2750de\": rpc error: code = NotFound desc = could not find container \"24db252246a7ff86eeb7b62b2e19432cc6ad57576b3ad627f30178430e2750de\": container with ID starting with 24db252246a7ff86eeb7b62b2e19432cc6ad57576b3ad627f30178430e2750de not found: ID does not exist" Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.526677 4746 scope.go:117] "RemoveContainer" containerID="c13a3c492588f5b7315133ab0dbf4142834e6c1448af485ed864540696429d93" Dec 11 10:52:20 crc kubenswrapper[4746]: E1211 
10:52:20.527113 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c13a3c492588f5b7315133ab0dbf4142834e6c1448af485ed864540696429d93\": container with ID starting with c13a3c492588f5b7315133ab0dbf4142834e6c1448af485ed864540696429d93 not found: ID does not exist" containerID="c13a3c492588f5b7315133ab0dbf4142834e6c1448af485ed864540696429d93" Dec 11 10:52:20 crc kubenswrapper[4746]: I1211 10:52:20.527219 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c13a3c492588f5b7315133ab0dbf4142834e6c1448af485ed864540696429d93"} err="failed to get container status \"c13a3c492588f5b7315133ab0dbf4142834e6c1448af485ed864540696429d93\": rpc error: code = NotFound desc = could not find container \"c13a3c492588f5b7315133ab0dbf4142834e6c1448af485ed864540696429d93\": container with ID starting with c13a3c492588f5b7315133ab0dbf4142834e6c1448af485ed864540696429d93 not found: ID does not exist" Dec 11 10:52:21 crc kubenswrapper[4746]: I1211 10:52:21.642851 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fdaec04-df55-4788-9409-f1f86bc04351" path="/var/lib/kubelet/pods/4fdaec04-df55-4788-9409-f1f86bc04351/volumes" Dec 11 10:52:53 crc kubenswrapper[4746]: I1211 10:52:53.847307 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4dhw7"] Dec 11 10:52:53 crc kubenswrapper[4746]: E1211 10:52:53.848570 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdaec04-df55-4788-9409-f1f86bc04351" containerName="extract-content" Dec 11 10:52:53 crc kubenswrapper[4746]: I1211 10:52:53.848591 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdaec04-df55-4788-9409-f1f86bc04351" containerName="extract-content" Dec 11 10:52:53 crc kubenswrapper[4746]: E1211 10:52:53.848621 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdaec04-df55-4788-9409-f1f86bc04351" 
containerName="extract-utilities" Dec 11 10:52:53 crc kubenswrapper[4746]: I1211 10:52:53.848628 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdaec04-df55-4788-9409-f1f86bc04351" containerName="extract-utilities" Dec 11 10:52:53 crc kubenswrapper[4746]: E1211 10:52:53.848644 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdaec04-df55-4788-9409-f1f86bc04351" containerName="registry-server" Dec 11 10:52:53 crc kubenswrapper[4746]: I1211 10:52:53.848654 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdaec04-df55-4788-9409-f1f86bc04351" containerName="registry-server" Dec 11 10:52:53 crc kubenswrapper[4746]: I1211 10:52:53.848932 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fdaec04-df55-4788-9409-f1f86bc04351" containerName="registry-server" Dec 11 10:52:53 crc kubenswrapper[4746]: I1211 10:52:53.851412 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:52:53 crc kubenswrapper[4746]: I1211 10:52:53.861037 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dhw7"] Dec 11 10:52:53 crc kubenswrapper[4746]: I1211 10:52:53.960843 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a092d76-f341-411e-bfa9-886a4af55a01-utilities\") pod \"redhat-marketplace-4dhw7\" (UID: \"9a092d76-f341-411e-bfa9-886a4af55a01\") " pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:52:53 crc kubenswrapper[4746]: I1211 10:52:53.961138 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a092d76-f341-411e-bfa9-886a4af55a01-catalog-content\") pod \"redhat-marketplace-4dhw7\" (UID: \"9a092d76-f341-411e-bfa9-886a4af55a01\") " 
pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:52:53 crc kubenswrapper[4746]: I1211 10:52:53.961325 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qbxm\" (UniqueName: \"kubernetes.io/projected/9a092d76-f341-411e-bfa9-886a4af55a01-kube-api-access-5qbxm\") pod \"redhat-marketplace-4dhw7\" (UID: \"9a092d76-f341-411e-bfa9-886a4af55a01\") " pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:52:54 crc kubenswrapper[4746]: I1211 10:52:54.062771 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a092d76-f341-411e-bfa9-886a4af55a01-utilities\") pod \"redhat-marketplace-4dhw7\" (UID: \"9a092d76-f341-411e-bfa9-886a4af55a01\") " pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:52:54 crc kubenswrapper[4746]: I1211 10:52:54.063170 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a092d76-f341-411e-bfa9-886a4af55a01-catalog-content\") pod \"redhat-marketplace-4dhw7\" (UID: \"9a092d76-f341-411e-bfa9-886a4af55a01\") " pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:52:54 crc kubenswrapper[4746]: I1211 10:52:54.063224 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qbxm\" (UniqueName: \"kubernetes.io/projected/9a092d76-f341-411e-bfa9-886a4af55a01-kube-api-access-5qbxm\") pod \"redhat-marketplace-4dhw7\" (UID: \"9a092d76-f341-411e-bfa9-886a4af55a01\") " pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:52:54 crc kubenswrapper[4746]: I1211 10:52:54.063356 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a092d76-f341-411e-bfa9-886a4af55a01-utilities\") pod \"redhat-marketplace-4dhw7\" (UID: \"9a092d76-f341-411e-bfa9-886a4af55a01\") " 
pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:52:54 crc kubenswrapper[4746]: I1211 10:52:54.063742 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a092d76-f341-411e-bfa9-886a4af55a01-catalog-content\") pod \"redhat-marketplace-4dhw7\" (UID: \"9a092d76-f341-411e-bfa9-886a4af55a01\") " pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:52:54 crc kubenswrapper[4746]: I1211 10:52:54.086713 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qbxm\" (UniqueName: \"kubernetes.io/projected/9a092d76-f341-411e-bfa9-886a4af55a01-kube-api-access-5qbxm\") pod \"redhat-marketplace-4dhw7\" (UID: \"9a092d76-f341-411e-bfa9-886a4af55a01\") " pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:52:54 crc kubenswrapper[4746]: I1211 10:52:54.175445 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:52:54 crc kubenswrapper[4746]: I1211 10:52:54.682127 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dhw7"] Dec 11 10:52:54 crc kubenswrapper[4746]: W1211 10:52:54.688218 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9a092d76_f341_411e_bfa9_886a4af55a01.slice/crio-202eb5e6e3694c9a5b139ed2d24589ed553f82edb7bafaeeb8242a898e209d23 WatchSource:0}: Error finding container 202eb5e6e3694c9a5b139ed2d24589ed553f82edb7bafaeeb8242a898e209d23: Status 404 returned error can't find the container with id 202eb5e6e3694c9a5b139ed2d24589ed553f82edb7bafaeeb8242a898e209d23 Dec 11 10:52:54 crc kubenswrapper[4746]: I1211 10:52:54.721440 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dhw7" 
event={"ID":"9a092d76-f341-411e-bfa9-886a4af55a01","Type":"ContainerStarted","Data":"202eb5e6e3694c9a5b139ed2d24589ed553f82edb7bafaeeb8242a898e209d23"} Dec 11 10:52:55 crc kubenswrapper[4746]: I1211 10:52:55.731249 4746 generic.go:334] "Generic (PLEG): container finished" podID="9a092d76-f341-411e-bfa9-886a4af55a01" containerID="83a009be9480c7b98ebd010e55f8a9a34eea0b439c96d539f880b82b6c55a29b" exitCode=0 Dec 11 10:52:55 crc kubenswrapper[4746]: I1211 10:52:55.731364 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dhw7" event={"ID":"9a092d76-f341-411e-bfa9-886a4af55a01","Type":"ContainerDied","Data":"83a009be9480c7b98ebd010e55f8a9a34eea0b439c96d539f880b82b6c55a29b"} Dec 11 10:52:57 crc kubenswrapper[4746]: I1211 10:52:57.755794 4746 generic.go:334] "Generic (PLEG): container finished" podID="9a092d76-f341-411e-bfa9-886a4af55a01" containerID="9b31dbf71436b3e1302dfd96022d48e2d381ab7d75ab352d5949a79528685433" exitCode=0 Dec 11 10:52:57 crc kubenswrapper[4746]: I1211 10:52:57.755921 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dhw7" event={"ID":"9a092d76-f341-411e-bfa9-886a4af55a01","Type":"ContainerDied","Data":"9b31dbf71436b3e1302dfd96022d48e2d381ab7d75ab352d5949a79528685433"} Dec 11 10:52:58 crc kubenswrapper[4746]: I1211 10:52:58.770003 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dhw7" event={"ID":"9a092d76-f341-411e-bfa9-886a4af55a01","Type":"ContainerStarted","Data":"ebd7b87e23aa8310334e9408f5c9d4b43414d36efc639b9607a2b1231ee08fb9"} Dec 11 10:52:58 crc kubenswrapper[4746]: I1211 10:52:58.807941 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4dhw7" podStartSLOduration=3.331796509 podStartE2EDuration="5.807917383s" podCreationTimestamp="2025-12-11 10:52:53 +0000 UTC" firstStartedPulling="2025-12-11 10:52:55.733456974 +0000 UTC 
m=+3548.593320277" lastFinishedPulling="2025-12-11 10:52:58.209577838 +0000 UTC m=+3551.069441151" observedRunningTime="2025-12-11 10:52:58.801780948 +0000 UTC m=+3551.661644271" watchObservedRunningTime="2025-12-11 10:52:58.807917383 +0000 UTC m=+3551.667780696" Dec 11 10:53:04 crc kubenswrapper[4746]: I1211 10:53:04.175934 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:53:04 crc kubenswrapper[4746]: I1211 10:53:04.176730 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:53:04 crc kubenswrapper[4746]: I1211 10:53:04.228628 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:53:04 crc kubenswrapper[4746]: I1211 10:53:04.903963 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:53:04 crc kubenswrapper[4746]: I1211 10:53:04.960244 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dhw7"] Dec 11 10:53:06 crc kubenswrapper[4746]: I1211 10:53:06.864371 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4dhw7" podUID="9a092d76-f341-411e-bfa9-886a4af55a01" containerName="registry-server" containerID="cri-o://ebd7b87e23aa8310334e9408f5c9d4b43414d36efc639b9607a2b1231ee08fb9" gracePeriod=2 Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.460901 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.659064 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a092d76-f341-411e-bfa9-886a4af55a01-catalog-content\") pod \"9a092d76-f341-411e-bfa9-886a4af55a01\" (UID: \"9a092d76-f341-411e-bfa9-886a4af55a01\") " Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.659132 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a092d76-f341-411e-bfa9-886a4af55a01-utilities\") pod \"9a092d76-f341-411e-bfa9-886a4af55a01\" (UID: \"9a092d76-f341-411e-bfa9-886a4af55a01\") " Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.659228 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qbxm\" (UniqueName: \"kubernetes.io/projected/9a092d76-f341-411e-bfa9-886a4af55a01-kube-api-access-5qbxm\") pod \"9a092d76-f341-411e-bfa9-886a4af55a01\" (UID: \"9a092d76-f341-411e-bfa9-886a4af55a01\") " Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.660404 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a092d76-f341-411e-bfa9-886a4af55a01-utilities" (OuterVolumeSpecName: "utilities") pod "9a092d76-f341-411e-bfa9-886a4af55a01" (UID: "9a092d76-f341-411e-bfa9-886a4af55a01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.666310 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a092d76-f341-411e-bfa9-886a4af55a01-kube-api-access-5qbxm" (OuterVolumeSpecName: "kube-api-access-5qbxm") pod "9a092d76-f341-411e-bfa9-886a4af55a01" (UID: "9a092d76-f341-411e-bfa9-886a4af55a01"). InnerVolumeSpecName "kube-api-access-5qbxm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.685392 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a092d76-f341-411e-bfa9-886a4af55a01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a092d76-f341-411e-bfa9-886a4af55a01" (UID: "9a092d76-f341-411e-bfa9-886a4af55a01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.762347 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a092d76-f341-411e-bfa9-886a4af55a01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.762429 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a092d76-f341-411e-bfa9-886a4af55a01-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.762452 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qbxm\" (UniqueName: \"kubernetes.io/projected/9a092d76-f341-411e-bfa9-886a4af55a01-kube-api-access-5qbxm\") on node \"crc\" DevicePath \"\"" Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.891112 4746 generic.go:334] "Generic (PLEG): container finished" podID="9a092d76-f341-411e-bfa9-886a4af55a01" containerID="ebd7b87e23aa8310334e9408f5c9d4b43414d36efc639b9607a2b1231ee08fb9" exitCode=0 Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.891165 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dhw7" event={"ID":"9a092d76-f341-411e-bfa9-886a4af55a01","Type":"ContainerDied","Data":"ebd7b87e23aa8310334e9408f5c9d4b43414d36efc639b9607a2b1231ee08fb9"} Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.891196 4746 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-4dhw7" event={"ID":"9a092d76-f341-411e-bfa9-886a4af55a01","Type":"ContainerDied","Data":"202eb5e6e3694c9a5b139ed2d24589ed553f82edb7bafaeeb8242a898e209d23"} Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.891215 4746 scope.go:117] "RemoveContainer" containerID="ebd7b87e23aa8310334e9408f5c9d4b43414d36efc639b9607a2b1231ee08fb9" Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.891221 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dhw7" Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.939514 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dhw7"] Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.943287 4746 scope.go:117] "RemoveContainer" containerID="9b31dbf71436b3e1302dfd96022d48e2d381ab7d75ab352d5949a79528685433" Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.954711 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dhw7"] Dec 11 10:53:07 crc kubenswrapper[4746]: I1211 10:53:07.973636 4746 scope.go:117] "RemoveContainer" containerID="83a009be9480c7b98ebd010e55f8a9a34eea0b439c96d539f880b82b6c55a29b" Dec 11 10:53:08 crc kubenswrapper[4746]: I1211 10:53:08.019895 4746 scope.go:117] "RemoveContainer" containerID="ebd7b87e23aa8310334e9408f5c9d4b43414d36efc639b9607a2b1231ee08fb9" Dec 11 10:53:08 crc kubenswrapper[4746]: E1211 10:53:08.020481 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebd7b87e23aa8310334e9408f5c9d4b43414d36efc639b9607a2b1231ee08fb9\": container with ID starting with ebd7b87e23aa8310334e9408f5c9d4b43414d36efc639b9607a2b1231ee08fb9 not found: ID does not exist" containerID="ebd7b87e23aa8310334e9408f5c9d4b43414d36efc639b9607a2b1231ee08fb9" Dec 11 10:53:08 crc kubenswrapper[4746]: I1211 10:53:08.020527 4746 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebd7b87e23aa8310334e9408f5c9d4b43414d36efc639b9607a2b1231ee08fb9"} err="failed to get container status \"ebd7b87e23aa8310334e9408f5c9d4b43414d36efc639b9607a2b1231ee08fb9\": rpc error: code = NotFound desc = could not find container \"ebd7b87e23aa8310334e9408f5c9d4b43414d36efc639b9607a2b1231ee08fb9\": container with ID starting with ebd7b87e23aa8310334e9408f5c9d4b43414d36efc639b9607a2b1231ee08fb9 not found: ID does not exist" Dec 11 10:53:08 crc kubenswrapper[4746]: I1211 10:53:08.020556 4746 scope.go:117] "RemoveContainer" containerID="9b31dbf71436b3e1302dfd96022d48e2d381ab7d75ab352d5949a79528685433" Dec 11 10:53:08 crc kubenswrapper[4746]: E1211 10:53:08.020962 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b31dbf71436b3e1302dfd96022d48e2d381ab7d75ab352d5949a79528685433\": container with ID starting with 9b31dbf71436b3e1302dfd96022d48e2d381ab7d75ab352d5949a79528685433 not found: ID does not exist" containerID="9b31dbf71436b3e1302dfd96022d48e2d381ab7d75ab352d5949a79528685433" Dec 11 10:53:08 crc kubenswrapper[4746]: I1211 10:53:08.020994 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b31dbf71436b3e1302dfd96022d48e2d381ab7d75ab352d5949a79528685433"} err="failed to get container status \"9b31dbf71436b3e1302dfd96022d48e2d381ab7d75ab352d5949a79528685433\": rpc error: code = NotFound desc = could not find container \"9b31dbf71436b3e1302dfd96022d48e2d381ab7d75ab352d5949a79528685433\": container with ID starting with 9b31dbf71436b3e1302dfd96022d48e2d381ab7d75ab352d5949a79528685433 not found: ID does not exist" Dec 11 10:53:08 crc kubenswrapper[4746]: I1211 10:53:08.021014 4746 scope.go:117] "RemoveContainer" containerID="83a009be9480c7b98ebd010e55f8a9a34eea0b439c96d539f880b82b6c55a29b" Dec 11 10:53:08 crc kubenswrapper[4746]: E1211 
10:53:08.021315 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83a009be9480c7b98ebd010e55f8a9a34eea0b439c96d539f880b82b6c55a29b\": container with ID starting with 83a009be9480c7b98ebd010e55f8a9a34eea0b439c96d539f880b82b6c55a29b not found: ID does not exist" containerID="83a009be9480c7b98ebd010e55f8a9a34eea0b439c96d539f880b82b6c55a29b" Dec 11 10:53:08 crc kubenswrapper[4746]: I1211 10:53:08.021351 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83a009be9480c7b98ebd010e55f8a9a34eea0b439c96d539f880b82b6c55a29b"} err="failed to get container status \"83a009be9480c7b98ebd010e55f8a9a34eea0b439c96d539f880b82b6c55a29b\": rpc error: code = NotFound desc = could not find container \"83a009be9480c7b98ebd010e55f8a9a34eea0b439c96d539f880b82b6c55a29b\": container with ID starting with 83a009be9480c7b98ebd010e55f8a9a34eea0b439c96d539f880b82b6c55a29b not found: ID does not exist" Dec 11 10:53:09 crc kubenswrapper[4746]: I1211 10:53:09.643496 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a092d76-f341-411e-bfa9-886a4af55a01" path="/var/lib/kubelet/pods/9a092d76-f341-411e-bfa9-886a4af55a01/volumes" Dec 11 10:53:14 crc kubenswrapper[4746]: I1211 10:53:14.956628 4746 generic.go:334] "Generic (PLEG): container finished" podID="dac76301-3400-4177-8a19-8b97a7480321" containerID="79cc54401a088d5633969999674d11fa594f30c622faf33e7b459a9278915f97" exitCode=0 Dec 11 10:53:14 crc kubenswrapper[4746]: I1211 10:53:14.956737 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"dac76301-3400-4177-8a19-8b97a7480321","Type":"ContainerDied","Data":"79cc54401a088d5633969999674d11fa594f30c622faf33e7b459a9278915f97"} Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.385632 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.561877 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"dac76301-3400-4177-8a19-8b97a7480321\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.561964 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dac76301-3400-4177-8a19-8b97a7480321-config-data\") pod \"dac76301-3400-4177-8a19-8b97a7480321\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.562027 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/dac76301-3400-4177-8a19-8b97a7480321-test-operator-ephemeral-temporary\") pod \"dac76301-3400-4177-8a19-8b97a7480321\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.562087 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-openstack-config-secret\") pod \"dac76301-3400-4177-8a19-8b97a7480321\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.562114 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dac76301-3400-4177-8a19-8b97a7480321-openstack-config\") pod \"dac76301-3400-4177-8a19-8b97a7480321\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.562154 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-ssh-key\") pod \"dac76301-3400-4177-8a19-8b97a7480321\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.562183 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/dac76301-3400-4177-8a19-8b97a7480321-test-operator-ephemeral-workdir\") pod \"dac76301-3400-4177-8a19-8b97a7480321\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.562294 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7qw8\" (UniqueName: \"kubernetes.io/projected/dac76301-3400-4177-8a19-8b97a7480321-kube-api-access-j7qw8\") pod \"dac76301-3400-4177-8a19-8b97a7480321\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.562367 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-ca-certs\") pod \"dac76301-3400-4177-8a19-8b97a7480321\" (UID: \"dac76301-3400-4177-8a19-8b97a7480321\") " Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.562722 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dac76301-3400-4177-8a19-8b97a7480321-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "dac76301-3400-4177-8a19-8b97a7480321" (UID: "dac76301-3400-4177-8a19-8b97a7480321"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.562934 4746 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/dac76301-3400-4177-8a19-8b97a7480321-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.563035 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dac76301-3400-4177-8a19-8b97a7480321-config-data" (OuterVolumeSpecName: "config-data") pod "dac76301-3400-4177-8a19-8b97a7480321" (UID: "dac76301-3400-4177-8a19-8b97a7480321"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.565146 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dac76301-3400-4177-8a19-8b97a7480321-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "dac76301-3400-4177-8a19-8b97a7480321" (UID: "dac76301-3400-4177-8a19-8b97a7480321"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.576901 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dac76301-3400-4177-8a19-8b97a7480321-kube-api-access-j7qw8" (OuterVolumeSpecName: "kube-api-access-j7qw8") pod "dac76301-3400-4177-8a19-8b97a7480321" (UID: "dac76301-3400-4177-8a19-8b97a7480321"). InnerVolumeSpecName "kube-api-access-j7qw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.603525 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "test-operator-logs") pod "dac76301-3400-4177-8a19-8b97a7480321" (UID: "dac76301-3400-4177-8a19-8b97a7480321"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.618801 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "dac76301-3400-4177-8a19-8b97a7480321" (UID: "dac76301-3400-4177-8a19-8b97a7480321"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.637480 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "dac76301-3400-4177-8a19-8b97a7480321" (UID: "dac76301-3400-4177-8a19-8b97a7480321"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.648854 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "dac76301-3400-4177-8a19-8b97a7480321" (UID: "dac76301-3400-4177-8a19-8b97a7480321"). InnerVolumeSpecName "ca-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.665927 4746 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.666010 4746 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/dac76301-3400-4177-8a19-8b97a7480321-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.666063 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7qw8\" (UniqueName: \"kubernetes.io/projected/dac76301-3400-4177-8a19-8b97a7480321-kube-api-access-j7qw8\") on node \"crc\" DevicePath \"\"" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.666078 4746 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.666104 4746 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.666115 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dac76301-3400-4177-8a19-8b97a7480321-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.666125 4746 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dac76301-3400-4177-8a19-8b97a7480321-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 
10:53:16.676100 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dac76301-3400-4177-8a19-8b97a7480321-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "dac76301-3400-4177-8a19-8b97a7480321" (UID: "dac76301-3400-4177-8a19-8b97a7480321"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.693712 4746 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.768619 4746 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.768665 4746 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dac76301-3400-4177-8a19-8b97a7480321-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.980803 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"dac76301-3400-4177-8a19-8b97a7480321","Type":"ContainerDied","Data":"ea5dd36a432eaa3689ff2563c926b1a24b8d78edc6e131a9aa7385d7e05699b7"} Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.980863 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea5dd36a432eaa3689ff2563c926b1a24b8d78edc6e131a9aa7385d7e05699b7" Dec 11 10:53:16 crc kubenswrapper[4746]: I1211 10:53:16.981241 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 11 10:53:18 crc kubenswrapper[4746]: I1211 10:53:18.908319 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wlbvz"] Dec 11 10:53:18 crc kubenswrapper[4746]: E1211 10:53:18.909344 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a092d76-f341-411e-bfa9-886a4af55a01" containerName="extract-utilities" Dec 11 10:53:18 crc kubenswrapper[4746]: I1211 10:53:18.909368 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a092d76-f341-411e-bfa9-886a4af55a01" containerName="extract-utilities" Dec 11 10:53:18 crc kubenswrapper[4746]: E1211 10:53:18.909389 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a092d76-f341-411e-bfa9-886a4af55a01" containerName="registry-server" Dec 11 10:53:18 crc kubenswrapper[4746]: I1211 10:53:18.909395 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a092d76-f341-411e-bfa9-886a4af55a01" containerName="registry-server" Dec 11 10:53:18 crc kubenswrapper[4746]: E1211 10:53:18.909420 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a092d76-f341-411e-bfa9-886a4af55a01" containerName="extract-content" Dec 11 10:53:18 crc kubenswrapper[4746]: I1211 10:53:18.909428 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a092d76-f341-411e-bfa9-886a4af55a01" containerName="extract-content" Dec 11 10:53:18 crc kubenswrapper[4746]: E1211 10:53:18.909446 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dac76301-3400-4177-8a19-8b97a7480321" containerName="tempest-tests-tempest-tests-runner" Dec 11 10:53:18 crc kubenswrapper[4746]: I1211 10:53:18.909453 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="dac76301-3400-4177-8a19-8b97a7480321" containerName="tempest-tests-tempest-tests-runner" Dec 11 10:53:18 crc kubenswrapper[4746]: I1211 10:53:18.909664 4746 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="9a092d76-f341-411e-bfa9-886a4af55a01" containerName="registry-server" Dec 11 10:53:18 crc kubenswrapper[4746]: I1211 10:53:18.909682 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="dac76301-3400-4177-8a19-8b97a7480321" containerName="tempest-tests-tempest-tests-runner" Dec 11 10:53:18 crc kubenswrapper[4746]: I1211 10:53:18.911616 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:18 crc kubenswrapper[4746]: I1211 10:53:18.923894 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wlbvz"] Dec 11 10:53:19 crc kubenswrapper[4746]: I1211 10:53:19.015545 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169b22bf-222f-4b68-8c99-72f7108f359b-catalog-content\") pod \"certified-operators-wlbvz\" (UID: \"169b22bf-222f-4b68-8c99-72f7108f359b\") " pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:19 crc kubenswrapper[4746]: I1211 10:53:19.015638 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169b22bf-222f-4b68-8c99-72f7108f359b-utilities\") pod \"certified-operators-wlbvz\" (UID: \"169b22bf-222f-4b68-8c99-72f7108f359b\") " pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:19 crc kubenswrapper[4746]: I1211 10:53:19.016078 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkcll\" (UniqueName: \"kubernetes.io/projected/169b22bf-222f-4b68-8c99-72f7108f359b-kube-api-access-gkcll\") pod \"certified-operators-wlbvz\" (UID: \"169b22bf-222f-4b68-8c99-72f7108f359b\") " pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:19 crc kubenswrapper[4746]: I1211 10:53:19.120761 4746 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169b22bf-222f-4b68-8c99-72f7108f359b-catalog-content\") pod \"certified-operators-wlbvz\" (UID: \"169b22bf-222f-4b68-8c99-72f7108f359b\") " pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:19 crc kubenswrapper[4746]: I1211 10:53:19.120844 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169b22bf-222f-4b68-8c99-72f7108f359b-utilities\") pod \"certified-operators-wlbvz\" (UID: \"169b22bf-222f-4b68-8c99-72f7108f359b\") " pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:19 crc kubenswrapper[4746]: I1211 10:53:19.120892 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkcll\" (UniqueName: \"kubernetes.io/projected/169b22bf-222f-4b68-8c99-72f7108f359b-kube-api-access-gkcll\") pod \"certified-operators-wlbvz\" (UID: \"169b22bf-222f-4b68-8c99-72f7108f359b\") " pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:19 crc kubenswrapper[4746]: I1211 10:53:19.121396 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169b22bf-222f-4b68-8c99-72f7108f359b-utilities\") pod \"certified-operators-wlbvz\" (UID: \"169b22bf-222f-4b68-8c99-72f7108f359b\") " pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:19 crc kubenswrapper[4746]: I1211 10:53:19.121389 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169b22bf-222f-4b68-8c99-72f7108f359b-catalog-content\") pod \"certified-operators-wlbvz\" (UID: \"169b22bf-222f-4b68-8c99-72f7108f359b\") " pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:19 crc kubenswrapper[4746]: I1211 10:53:19.143107 4746 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gkcll\" (UniqueName: \"kubernetes.io/projected/169b22bf-222f-4b68-8c99-72f7108f359b-kube-api-access-gkcll\") pod \"certified-operators-wlbvz\" (UID: \"169b22bf-222f-4b68-8c99-72f7108f359b\") " pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:19 crc kubenswrapper[4746]: I1211 10:53:19.246575 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:19 crc kubenswrapper[4746]: I1211 10:53:19.812727 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wlbvz"] Dec 11 10:53:19 crc kubenswrapper[4746]: W1211 10:53:19.835208 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod169b22bf_222f_4b68_8c99_72f7108f359b.slice/crio-a3287dee4d8f15742c00f4cffe5f8154c46f4b0734a23e0f280b8a3f18872cb4 WatchSource:0}: Error finding container a3287dee4d8f15742c00f4cffe5f8154c46f4b0734a23e0f280b8a3f18872cb4: Status 404 returned error can't find the container with id a3287dee4d8f15742c00f4cffe5f8154c46f4b0734a23e0f280b8a3f18872cb4 Dec 11 10:53:20 crc kubenswrapper[4746]: I1211 10:53:20.036309 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlbvz" event={"ID":"169b22bf-222f-4b68-8c99-72f7108f359b","Type":"ContainerStarted","Data":"a3287dee4d8f15742c00f4cffe5f8154c46f4b0734a23e0f280b8a3f18872cb4"} Dec 11 10:53:21 crc kubenswrapper[4746]: I1211 10:53:21.048872 4746 generic.go:334] "Generic (PLEG): container finished" podID="169b22bf-222f-4b68-8c99-72f7108f359b" containerID="fe221e45832ed53be31d6fccbedd7370f4d2f6baae43a72ef01435eb65c2bb29" exitCode=0 Dec 11 10:53:21 crc kubenswrapper[4746]: I1211 10:53:21.048986 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlbvz" 
event={"ID":"169b22bf-222f-4b68-8c99-72f7108f359b","Type":"ContainerDied","Data":"fe221e45832ed53be31d6fccbedd7370f4d2f6baae43a72ef01435eb65c2bb29"} Dec 11 10:53:23 crc kubenswrapper[4746]: I1211 10:53:23.070324 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlbvz" event={"ID":"169b22bf-222f-4b68-8c99-72f7108f359b","Type":"ContainerStarted","Data":"5eff4e7c55e2ac4358cf773bf57539126b11a4be0ba1a9b20f38713f53c8ed50"} Dec 11 10:53:23 crc kubenswrapper[4746]: I1211 10:53:23.245898 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 11 10:53:23 crc kubenswrapper[4746]: I1211 10:53:23.247820 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 10:53:23 crc kubenswrapper[4746]: I1211 10:53:23.250284 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-kh2nr" Dec 11 10:53:23 crc kubenswrapper[4746]: I1211 10:53:23.255852 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 11 10:53:23 crc kubenswrapper[4746]: I1211 10:53:23.417463 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scgmv\" (UniqueName: \"kubernetes.io/projected/c7853e46-c0f0-403f-b095-fc60d413a35f-kube-api-access-scgmv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c7853e46-c0f0-403f-b095-fc60d413a35f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 10:53:23 crc kubenswrapper[4746]: I1211 10:53:23.417599 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod 
\"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c7853e46-c0f0-403f-b095-fc60d413a35f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 10:53:23 crc kubenswrapper[4746]: I1211 10:53:23.519431 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scgmv\" (UniqueName: \"kubernetes.io/projected/c7853e46-c0f0-403f-b095-fc60d413a35f-kube-api-access-scgmv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c7853e46-c0f0-403f-b095-fc60d413a35f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 10:53:23 crc kubenswrapper[4746]: I1211 10:53:23.519525 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c7853e46-c0f0-403f-b095-fc60d413a35f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 10:53:23 crc kubenswrapper[4746]: I1211 10:53:23.519942 4746 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c7853e46-c0f0-403f-b095-fc60d413a35f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 10:53:23 crc kubenswrapper[4746]: I1211 10:53:23.546190 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scgmv\" (UniqueName: \"kubernetes.io/projected/c7853e46-c0f0-403f-b095-fc60d413a35f-kube-api-access-scgmv\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c7853e46-c0f0-403f-b095-fc60d413a35f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 10:53:23 crc kubenswrapper[4746]: I1211 10:53:23.550479 
4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"c7853e46-c0f0-403f-b095-fc60d413a35f\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 10:53:23 crc kubenswrapper[4746]: I1211 10:53:23.586079 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 11 10:53:24 crc kubenswrapper[4746]: I1211 10:53:24.054218 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 11 10:53:24 crc kubenswrapper[4746]: I1211 10:53:24.090430 4746 generic.go:334] "Generic (PLEG): container finished" podID="169b22bf-222f-4b68-8c99-72f7108f359b" containerID="5eff4e7c55e2ac4358cf773bf57539126b11a4be0ba1a9b20f38713f53c8ed50" exitCode=0 Dec 11 10:53:24 crc kubenswrapper[4746]: I1211 10:53:24.090521 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlbvz" event={"ID":"169b22bf-222f-4b68-8c99-72f7108f359b","Type":"ContainerDied","Data":"5eff4e7c55e2ac4358cf773bf57539126b11a4be0ba1a9b20f38713f53c8ed50"} Dec 11 10:53:24 crc kubenswrapper[4746]: W1211 10:53:24.095721 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7853e46_c0f0_403f_b095_fc60d413a35f.slice/crio-43cd87ace3c357ebe17ad455ea7b0d0b15b109c0407d8c5c98d4d407fcd36b68 WatchSource:0}: Error finding container 43cd87ace3c357ebe17ad455ea7b0d0b15b109c0407d8c5c98d4d407fcd36b68: Status 404 returned error can't find the container with id 43cd87ace3c357ebe17ad455ea7b0d0b15b109c0407d8c5c98d4d407fcd36b68 Dec 11 10:53:25 crc kubenswrapper[4746]: I1211 10:53:25.102364 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-wlbvz" event={"ID":"169b22bf-222f-4b68-8c99-72f7108f359b","Type":"ContainerStarted","Data":"236993a49173db8b2826e612c0ed6ffee3123d4f837d769d027b55da4cd1e254"} Dec 11 10:53:25 crc kubenswrapper[4746]: I1211 10:53:25.104221 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"c7853e46-c0f0-403f-b095-fc60d413a35f","Type":"ContainerStarted","Data":"43cd87ace3c357ebe17ad455ea7b0d0b15b109c0407d8c5c98d4d407fcd36b68"} Dec 11 10:53:25 crc kubenswrapper[4746]: I1211 10:53:25.123924 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wlbvz" podStartSLOduration=3.5844373689999998 podStartE2EDuration="7.123899298s" podCreationTimestamp="2025-12-11 10:53:18 +0000 UTC" firstStartedPulling="2025-12-11 10:53:21.051834166 +0000 UTC m=+3573.911697519" lastFinishedPulling="2025-12-11 10:53:24.591296135 +0000 UTC m=+3577.451159448" observedRunningTime="2025-12-11 10:53:25.119032177 +0000 UTC m=+3577.978895510" watchObservedRunningTime="2025-12-11 10:53:25.123899298 +0000 UTC m=+3577.983762601" Dec 11 10:53:27 crc kubenswrapper[4746]: I1211 10:53:27.128285 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"c7853e46-c0f0-403f-b095-fc60d413a35f","Type":"ContainerStarted","Data":"66e3a7c995c8e4e13687d0063e70acc2c4218df48fb12411e6922477e2b4f195"} Dec 11 10:53:27 crc kubenswrapper[4746]: I1211 10:53:27.168669 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.154133513 podStartE2EDuration="4.168635129s" podCreationTimestamp="2025-12-11 10:53:23 +0000 UTC" firstStartedPulling="2025-12-11 10:53:24.099693108 +0000 UTC m=+3576.959556421" lastFinishedPulling="2025-12-11 10:53:26.114194724 +0000 UTC 
m=+3578.974058037" observedRunningTime="2025-12-11 10:53:27.14942423 +0000 UTC m=+3580.009287543" watchObservedRunningTime="2025-12-11 10:53:27.168635129 +0000 UTC m=+3580.028498442" Dec 11 10:53:29 crc kubenswrapper[4746]: I1211 10:53:29.247547 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:29 crc kubenswrapper[4746]: I1211 10:53:29.248623 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:29 crc kubenswrapper[4746]: I1211 10:53:29.307529 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:29 crc kubenswrapper[4746]: I1211 10:53:29.878144 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:53:29 crc kubenswrapper[4746]: I1211 10:53:29.878218 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:53:30 crc kubenswrapper[4746]: I1211 10:53:30.213742 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:30 crc kubenswrapper[4746]: I1211 10:53:30.293560 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wlbvz"] Dec 11 10:53:32 crc kubenswrapper[4746]: I1211 10:53:32.187028 4746 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/certified-operators-wlbvz" podUID="169b22bf-222f-4b68-8c99-72f7108f359b" containerName="registry-server" containerID="cri-o://236993a49173db8b2826e612c0ed6ffee3123d4f837d769d027b55da4cd1e254" gracePeriod=2 Dec 11 10:53:32 crc kubenswrapper[4746]: I1211 10:53:32.699328 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:32 crc kubenswrapper[4746]: I1211 10:53:32.890371 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169b22bf-222f-4b68-8c99-72f7108f359b-utilities\") pod \"169b22bf-222f-4b68-8c99-72f7108f359b\" (UID: \"169b22bf-222f-4b68-8c99-72f7108f359b\") " Dec 11 10:53:32 crc kubenswrapper[4746]: I1211 10:53:32.890478 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169b22bf-222f-4b68-8c99-72f7108f359b-catalog-content\") pod \"169b22bf-222f-4b68-8c99-72f7108f359b\" (UID: \"169b22bf-222f-4b68-8c99-72f7108f359b\") " Dec 11 10:53:32 crc kubenswrapper[4746]: I1211 10:53:32.890645 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkcll\" (UniqueName: \"kubernetes.io/projected/169b22bf-222f-4b68-8c99-72f7108f359b-kube-api-access-gkcll\") pod \"169b22bf-222f-4b68-8c99-72f7108f359b\" (UID: \"169b22bf-222f-4b68-8c99-72f7108f359b\") " Dec 11 10:53:32 crc kubenswrapper[4746]: I1211 10:53:32.891345 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/169b22bf-222f-4b68-8c99-72f7108f359b-utilities" (OuterVolumeSpecName: "utilities") pod "169b22bf-222f-4b68-8c99-72f7108f359b" (UID: "169b22bf-222f-4b68-8c99-72f7108f359b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:53:32 crc kubenswrapper[4746]: I1211 10:53:32.899382 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169b22bf-222f-4b68-8c99-72f7108f359b-kube-api-access-gkcll" (OuterVolumeSpecName: "kube-api-access-gkcll") pod "169b22bf-222f-4b68-8c99-72f7108f359b" (UID: "169b22bf-222f-4b68-8c99-72f7108f359b"). InnerVolumeSpecName "kube-api-access-gkcll". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:53:32 crc kubenswrapper[4746]: I1211 10:53:32.946137 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/169b22bf-222f-4b68-8c99-72f7108f359b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "169b22bf-222f-4b68-8c99-72f7108f359b" (UID: "169b22bf-222f-4b68-8c99-72f7108f359b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:53:32 crc kubenswrapper[4746]: I1211 10:53:32.994011 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/169b22bf-222f-4b68-8c99-72f7108f359b-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 10:53:32 crc kubenswrapper[4746]: I1211 10:53:32.994083 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/169b22bf-222f-4b68-8c99-72f7108f359b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 10:53:32 crc kubenswrapper[4746]: I1211 10:53:32.994101 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkcll\" (UniqueName: \"kubernetes.io/projected/169b22bf-222f-4b68-8c99-72f7108f359b-kube-api-access-gkcll\") on node \"crc\" DevicePath \"\"" Dec 11 10:53:33 crc kubenswrapper[4746]: I1211 10:53:33.197954 4746 generic.go:334] "Generic (PLEG): container finished" podID="169b22bf-222f-4b68-8c99-72f7108f359b" 
containerID="236993a49173db8b2826e612c0ed6ffee3123d4f837d769d027b55da4cd1e254" exitCode=0 Dec 11 10:53:33 crc kubenswrapper[4746]: I1211 10:53:33.198002 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlbvz" event={"ID":"169b22bf-222f-4b68-8c99-72f7108f359b","Type":"ContainerDied","Data":"236993a49173db8b2826e612c0ed6ffee3123d4f837d769d027b55da4cd1e254"} Dec 11 10:53:33 crc kubenswrapper[4746]: I1211 10:53:33.198014 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wlbvz" Dec 11 10:53:33 crc kubenswrapper[4746]: I1211 10:53:33.198032 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wlbvz" event={"ID":"169b22bf-222f-4b68-8c99-72f7108f359b","Type":"ContainerDied","Data":"a3287dee4d8f15742c00f4cffe5f8154c46f4b0734a23e0f280b8a3f18872cb4"} Dec 11 10:53:33 crc kubenswrapper[4746]: I1211 10:53:33.198071 4746 scope.go:117] "RemoveContainer" containerID="236993a49173db8b2826e612c0ed6ffee3123d4f837d769d027b55da4cd1e254" Dec 11 10:53:33 crc kubenswrapper[4746]: I1211 10:53:33.235163 4746 scope.go:117] "RemoveContainer" containerID="5eff4e7c55e2ac4358cf773bf57539126b11a4be0ba1a9b20f38713f53c8ed50" Dec 11 10:53:33 crc kubenswrapper[4746]: I1211 10:53:33.241386 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wlbvz"] Dec 11 10:53:33 crc kubenswrapper[4746]: I1211 10:53:33.252463 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wlbvz"] Dec 11 10:53:33 crc kubenswrapper[4746]: I1211 10:53:33.257604 4746 scope.go:117] "RemoveContainer" containerID="fe221e45832ed53be31d6fccbedd7370f4d2f6baae43a72ef01435eb65c2bb29" Dec 11 10:53:33 crc kubenswrapper[4746]: I1211 10:53:33.315321 4746 scope.go:117] "RemoveContainer" containerID="236993a49173db8b2826e612c0ed6ffee3123d4f837d769d027b55da4cd1e254" Dec 11 
10:53:33 crc kubenswrapper[4746]: E1211 10:53:33.316305 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"236993a49173db8b2826e612c0ed6ffee3123d4f837d769d027b55da4cd1e254\": container with ID starting with 236993a49173db8b2826e612c0ed6ffee3123d4f837d769d027b55da4cd1e254 not found: ID does not exist" containerID="236993a49173db8b2826e612c0ed6ffee3123d4f837d769d027b55da4cd1e254" Dec 11 10:53:33 crc kubenswrapper[4746]: I1211 10:53:33.316409 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"236993a49173db8b2826e612c0ed6ffee3123d4f837d769d027b55da4cd1e254"} err="failed to get container status \"236993a49173db8b2826e612c0ed6ffee3123d4f837d769d027b55da4cd1e254\": rpc error: code = NotFound desc = could not find container \"236993a49173db8b2826e612c0ed6ffee3123d4f837d769d027b55da4cd1e254\": container with ID starting with 236993a49173db8b2826e612c0ed6ffee3123d4f837d769d027b55da4cd1e254 not found: ID does not exist" Dec 11 10:53:33 crc kubenswrapper[4746]: I1211 10:53:33.316437 4746 scope.go:117] "RemoveContainer" containerID="5eff4e7c55e2ac4358cf773bf57539126b11a4be0ba1a9b20f38713f53c8ed50" Dec 11 10:53:33 crc kubenswrapper[4746]: E1211 10:53:33.316994 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eff4e7c55e2ac4358cf773bf57539126b11a4be0ba1a9b20f38713f53c8ed50\": container with ID starting with 5eff4e7c55e2ac4358cf773bf57539126b11a4be0ba1a9b20f38713f53c8ed50 not found: ID does not exist" containerID="5eff4e7c55e2ac4358cf773bf57539126b11a4be0ba1a9b20f38713f53c8ed50" Dec 11 10:53:33 crc kubenswrapper[4746]: I1211 10:53:33.317041 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eff4e7c55e2ac4358cf773bf57539126b11a4be0ba1a9b20f38713f53c8ed50"} err="failed to get container status 
\"5eff4e7c55e2ac4358cf773bf57539126b11a4be0ba1a9b20f38713f53c8ed50\": rpc error: code = NotFound desc = could not find container \"5eff4e7c55e2ac4358cf773bf57539126b11a4be0ba1a9b20f38713f53c8ed50\": container with ID starting with 5eff4e7c55e2ac4358cf773bf57539126b11a4be0ba1a9b20f38713f53c8ed50 not found: ID does not exist" Dec 11 10:53:33 crc kubenswrapper[4746]: I1211 10:53:33.317091 4746 scope.go:117] "RemoveContainer" containerID="fe221e45832ed53be31d6fccbedd7370f4d2f6baae43a72ef01435eb65c2bb29" Dec 11 10:53:33 crc kubenswrapper[4746]: E1211 10:53:33.317435 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe221e45832ed53be31d6fccbedd7370f4d2f6baae43a72ef01435eb65c2bb29\": container with ID starting with fe221e45832ed53be31d6fccbedd7370f4d2f6baae43a72ef01435eb65c2bb29 not found: ID does not exist" containerID="fe221e45832ed53be31d6fccbedd7370f4d2f6baae43a72ef01435eb65c2bb29" Dec 11 10:53:33 crc kubenswrapper[4746]: I1211 10:53:33.317470 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe221e45832ed53be31d6fccbedd7370f4d2f6baae43a72ef01435eb65c2bb29"} err="failed to get container status \"fe221e45832ed53be31d6fccbedd7370f4d2f6baae43a72ef01435eb65c2bb29\": rpc error: code = NotFound desc = could not find container \"fe221e45832ed53be31d6fccbedd7370f4d2f6baae43a72ef01435eb65c2bb29\": container with ID starting with fe221e45832ed53be31d6fccbedd7370f4d2f6baae43a72ef01435eb65c2bb29 not found: ID does not exist" Dec 11 10:53:33 crc kubenswrapper[4746]: I1211 10:53:33.641466 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="169b22bf-222f-4b68-8c99-72f7108f359b" path="/var/lib/kubelet/pods/169b22bf-222f-4b68-8c99-72f7108f359b/volumes" Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.258538 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jttbd/must-gather-lqxkr"] Dec 11 10:53:49 
crc kubenswrapper[4746]: E1211 10:53:49.259720 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169b22bf-222f-4b68-8c99-72f7108f359b" containerName="registry-server" Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.259736 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="169b22bf-222f-4b68-8c99-72f7108f359b" containerName="registry-server" Dec 11 10:53:49 crc kubenswrapper[4746]: E1211 10:53:49.259764 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169b22bf-222f-4b68-8c99-72f7108f359b" containerName="extract-content" Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.259770 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="169b22bf-222f-4b68-8c99-72f7108f359b" containerName="extract-content" Dec 11 10:53:49 crc kubenswrapper[4746]: E1211 10:53:49.259803 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169b22bf-222f-4b68-8c99-72f7108f359b" containerName="extract-utilities" Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.259814 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="169b22bf-222f-4b68-8c99-72f7108f359b" containerName="extract-utilities" Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.260029 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="169b22bf-222f-4b68-8c99-72f7108f359b" containerName="registry-server" Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.261237 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jttbd/must-gather-lqxkr" Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.267749 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-jttbd"/"default-dockercfg-jvg8h" Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.267825 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jttbd"/"openshift-service-ca.crt" Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.268009 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jttbd"/"kube-root-ca.crt" Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.277468 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jttbd/must-gather-lqxkr"] Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.299011 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b749c961-8ad9-411a-90d0-f1294a614816-must-gather-output\") pod \"must-gather-lqxkr\" (UID: \"b749c961-8ad9-411a-90d0-f1294a614816\") " pod="openshift-must-gather-jttbd/must-gather-lqxkr" Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.299231 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxbxx\" (UniqueName: \"kubernetes.io/projected/b749c961-8ad9-411a-90d0-f1294a614816-kube-api-access-sxbxx\") pod \"must-gather-lqxkr\" (UID: \"b749c961-8ad9-411a-90d0-f1294a614816\") " pod="openshift-must-gather-jttbd/must-gather-lqxkr" Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.401199 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxbxx\" (UniqueName: \"kubernetes.io/projected/b749c961-8ad9-411a-90d0-f1294a614816-kube-api-access-sxbxx\") pod \"must-gather-lqxkr\" (UID: \"b749c961-8ad9-411a-90d0-f1294a614816\") " 
pod="openshift-must-gather-jttbd/must-gather-lqxkr" Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.401366 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b749c961-8ad9-411a-90d0-f1294a614816-must-gather-output\") pod \"must-gather-lqxkr\" (UID: \"b749c961-8ad9-411a-90d0-f1294a614816\") " pod="openshift-must-gather-jttbd/must-gather-lqxkr" Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.401977 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b749c961-8ad9-411a-90d0-f1294a614816-must-gather-output\") pod \"must-gather-lqxkr\" (UID: \"b749c961-8ad9-411a-90d0-f1294a614816\") " pod="openshift-must-gather-jttbd/must-gather-lqxkr" Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.425717 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxbxx\" (UniqueName: \"kubernetes.io/projected/b749c961-8ad9-411a-90d0-f1294a614816-kube-api-access-sxbxx\") pod \"must-gather-lqxkr\" (UID: \"b749c961-8ad9-411a-90d0-f1294a614816\") " pod="openshift-must-gather-jttbd/must-gather-lqxkr" Dec 11 10:53:49 crc kubenswrapper[4746]: I1211 10:53:49.582343 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jttbd/must-gather-lqxkr" Dec 11 10:53:50 crc kubenswrapper[4746]: I1211 10:53:50.088893 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jttbd/must-gather-lqxkr"] Dec 11 10:53:50 crc kubenswrapper[4746]: I1211 10:53:50.368270 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jttbd/must-gather-lqxkr" event={"ID":"b749c961-8ad9-411a-90d0-f1294a614816","Type":"ContainerStarted","Data":"c052e1f207163e9618fe2f099a72e5adf9c2075fdc3d4d576fc066c1fcceaebc"} Dec 11 10:53:57 crc kubenswrapper[4746]: I1211 10:53:57.465670 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jttbd/must-gather-lqxkr" event={"ID":"b749c961-8ad9-411a-90d0-f1294a614816","Type":"ContainerStarted","Data":"607f3e05925fad47a871ec76ae03de5be2bed605c05bbf880b2ff9d00ab1bd49"} Dec 11 10:53:57 crc kubenswrapper[4746]: I1211 10:53:57.466714 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jttbd/must-gather-lqxkr" event={"ID":"b749c961-8ad9-411a-90d0-f1294a614816","Type":"ContainerStarted","Data":"951d1f6138267f03ff5f29de2cf269ead8eb85f0150a7df2d8b2c0f18c8af84b"} Dec 11 10:53:57 crc kubenswrapper[4746]: I1211 10:53:57.488317 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jttbd/must-gather-lqxkr" podStartSLOduration=2.12465402 podStartE2EDuration="8.488295228s" podCreationTimestamp="2025-12-11 10:53:49 +0000 UTC" firstStartedPulling="2025-12-11 10:53:50.100251415 +0000 UTC m=+3602.960114728" lastFinishedPulling="2025-12-11 10:53:56.463892623 +0000 UTC m=+3609.323755936" observedRunningTime="2025-12-11 10:53:57.481609529 +0000 UTC m=+3610.341472842" watchObservedRunningTime="2025-12-11 10:53:57.488295228 +0000 UTC m=+3610.348158541" Dec 11 10:53:59 crc kubenswrapper[4746]: I1211 10:53:59.877012 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:53:59 crc kubenswrapper[4746]: I1211 10:53:59.877526 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:54:01 crc kubenswrapper[4746]: I1211 10:54:01.021474 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jttbd/crc-debug-x6mb2"] Dec 11 10:54:01 crc kubenswrapper[4746]: I1211 10:54:01.023556 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jttbd/crc-debug-x6mb2" Dec 11 10:54:01 crc kubenswrapper[4746]: I1211 10:54:01.164176 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46njc\" (UniqueName: \"kubernetes.io/projected/0958d458-17b0-494f-b47f-9e053ee1e665-kube-api-access-46njc\") pod \"crc-debug-x6mb2\" (UID: \"0958d458-17b0-494f-b47f-9e053ee1e665\") " pod="openshift-must-gather-jttbd/crc-debug-x6mb2" Dec 11 10:54:01 crc kubenswrapper[4746]: I1211 10:54:01.164484 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0958d458-17b0-494f-b47f-9e053ee1e665-host\") pod \"crc-debug-x6mb2\" (UID: \"0958d458-17b0-494f-b47f-9e053ee1e665\") " pod="openshift-must-gather-jttbd/crc-debug-x6mb2" Dec 11 10:54:01 crc kubenswrapper[4746]: I1211 10:54:01.267141 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46njc\" (UniqueName: 
\"kubernetes.io/projected/0958d458-17b0-494f-b47f-9e053ee1e665-kube-api-access-46njc\") pod \"crc-debug-x6mb2\" (UID: \"0958d458-17b0-494f-b47f-9e053ee1e665\") " pod="openshift-must-gather-jttbd/crc-debug-x6mb2" Dec 11 10:54:01 crc kubenswrapper[4746]: I1211 10:54:01.267253 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0958d458-17b0-494f-b47f-9e053ee1e665-host\") pod \"crc-debug-x6mb2\" (UID: \"0958d458-17b0-494f-b47f-9e053ee1e665\") " pod="openshift-must-gather-jttbd/crc-debug-x6mb2" Dec 11 10:54:01 crc kubenswrapper[4746]: I1211 10:54:01.267370 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0958d458-17b0-494f-b47f-9e053ee1e665-host\") pod \"crc-debug-x6mb2\" (UID: \"0958d458-17b0-494f-b47f-9e053ee1e665\") " pod="openshift-must-gather-jttbd/crc-debug-x6mb2" Dec 11 10:54:01 crc kubenswrapper[4746]: I1211 10:54:01.290289 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46njc\" (UniqueName: \"kubernetes.io/projected/0958d458-17b0-494f-b47f-9e053ee1e665-kube-api-access-46njc\") pod \"crc-debug-x6mb2\" (UID: \"0958d458-17b0-494f-b47f-9e053ee1e665\") " pod="openshift-must-gather-jttbd/crc-debug-x6mb2" Dec 11 10:54:01 crc kubenswrapper[4746]: I1211 10:54:01.351927 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jttbd/crc-debug-x6mb2" Dec 11 10:54:01 crc kubenswrapper[4746]: I1211 10:54:01.504990 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jttbd/crc-debug-x6mb2" event={"ID":"0958d458-17b0-494f-b47f-9e053ee1e665","Type":"ContainerStarted","Data":"b33c9f9ef389a49c76776e5e2f5c543db0b8573d0167e9ec245f66f1a8014a90"} Dec 11 10:54:15 crc kubenswrapper[4746]: I1211 10:54:15.674933 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jttbd/crc-debug-x6mb2" event={"ID":"0958d458-17b0-494f-b47f-9e053ee1e665","Type":"ContainerStarted","Data":"72f84c51c46ff0a1bfd09bc7d84b4009573e1c8e0d2421a33ba1a7010faddc14"} Dec 11 10:54:15 crc kubenswrapper[4746]: I1211 10:54:15.692133 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jttbd/crc-debug-x6mb2" podStartSLOduration=1.609566344 podStartE2EDuration="14.692111821s" podCreationTimestamp="2025-12-11 10:54:01 +0000 UTC" firstStartedPulling="2025-12-11 10:54:01.383392318 +0000 UTC m=+3614.243255631" lastFinishedPulling="2025-12-11 10:54:14.465937795 +0000 UTC m=+3627.325801108" observedRunningTime="2025-12-11 10:54:15.68947175 +0000 UTC m=+3628.549335063" watchObservedRunningTime="2025-12-11 10:54:15.692111821 +0000 UTC m=+3628.551975134" Dec 11 10:54:29 crc kubenswrapper[4746]: I1211 10:54:29.877848 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:54:29 crc kubenswrapper[4746]: I1211 10:54:29.878518 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:54:29 crc kubenswrapper[4746]: I1211 10:54:29.878581 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 10:54:29 crc kubenswrapper[4746]: I1211 10:54:29.879540 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"103e69231676c121cf31be18b3bc2feb4dfdceb796773042bf1afbc251fcee31"} pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:54:29 crc kubenswrapper[4746]: I1211 10:54:29.879597 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" containerID="cri-o://103e69231676c121cf31be18b3bc2feb4dfdceb796773042bf1afbc251fcee31" gracePeriod=600 Dec 11 10:54:30 crc kubenswrapper[4746]: I1211 10:54:30.912451 4746 generic.go:334] "Generic (PLEG): container finished" podID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerID="103e69231676c121cf31be18b3bc2feb4dfdceb796773042bf1afbc251fcee31" exitCode=0 Dec 11 10:54:30 crc kubenswrapper[4746]: I1211 10:54:30.912545 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerDied","Data":"103e69231676c121cf31be18b3bc2feb4dfdceb796773042bf1afbc251fcee31"} Dec 11 10:54:30 crc kubenswrapper[4746]: I1211 10:54:30.913073 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" 
event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba"} Dec 11 10:54:30 crc kubenswrapper[4746]: I1211 10:54:30.913100 4746 scope.go:117] "RemoveContainer" containerID="0f4a3ca94f4e89e66f6f038aad103c7b89814a48aa47f1a49a725450806aeb61" Dec 11 10:55:04 crc kubenswrapper[4746]: I1211 10:55:04.244838 4746 generic.go:334] "Generic (PLEG): container finished" podID="0958d458-17b0-494f-b47f-9e053ee1e665" containerID="72f84c51c46ff0a1bfd09bc7d84b4009573e1c8e0d2421a33ba1a7010faddc14" exitCode=0 Dec 11 10:55:04 crc kubenswrapper[4746]: I1211 10:55:04.244940 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jttbd/crc-debug-x6mb2" event={"ID":"0958d458-17b0-494f-b47f-9e053ee1e665","Type":"ContainerDied","Data":"72f84c51c46ff0a1bfd09bc7d84b4009573e1c8e0d2421a33ba1a7010faddc14"} Dec 11 10:55:05 crc kubenswrapper[4746]: I1211 10:55:05.390032 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jttbd/crc-debug-x6mb2" Dec 11 10:55:05 crc kubenswrapper[4746]: I1211 10:55:05.448770 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jttbd/crc-debug-x6mb2"] Dec 11 10:55:05 crc kubenswrapper[4746]: I1211 10:55:05.458581 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jttbd/crc-debug-x6mb2"] Dec 11 10:55:05 crc kubenswrapper[4746]: I1211 10:55:05.571991 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46njc\" (UniqueName: \"kubernetes.io/projected/0958d458-17b0-494f-b47f-9e053ee1e665-kube-api-access-46njc\") pod \"0958d458-17b0-494f-b47f-9e053ee1e665\" (UID: \"0958d458-17b0-494f-b47f-9e053ee1e665\") " Dec 11 10:55:05 crc kubenswrapper[4746]: I1211 10:55:05.572126 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0958d458-17b0-494f-b47f-9e053ee1e665-host\") pod \"0958d458-17b0-494f-b47f-9e053ee1e665\" (UID: \"0958d458-17b0-494f-b47f-9e053ee1e665\") " Dec 11 10:55:05 crc kubenswrapper[4746]: I1211 10:55:05.572241 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0958d458-17b0-494f-b47f-9e053ee1e665-host" (OuterVolumeSpecName: "host") pod "0958d458-17b0-494f-b47f-9e053ee1e665" (UID: "0958d458-17b0-494f-b47f-9e053ee1e665"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:55:05 crc kubenswrapper[4746]: I1211 10:55:05.572906 4746 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0958d458-17b0-494f-b47f-9e053ee1e665-host\") on node \"crc\" DevicePath \"\"" Dec 11 10:55:05 crc kubenswrapper[4746]: I1211 10:55:05.578873 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0958d458-17b0-494f-b47f-9e053ee1e665-kube-api-access-46njc" (OuterVolumeSpecName: "kube-api-access-46njc") pod "0958d458-17b0-494f-b47f-9e053ee1e665" (UID: "0958d458-17b0-494f-b47f-9e053ee1e665"). InnerVolumeSpecName "kube-api-access-46njc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:55:05 crc kubenswrapper[4746]: I1211 10:55:05.642916 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0958d458-17b0-494f-b47f-9e053ee1e665" path="/var/lib/kubelet/pods/0958d458-17b0-494f-b47f-9e053ee1e665/volumes" Dec 11 10:55:05 crc kubenswrapper[4746]: I1211 10:55:05.674653 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46njc\" (UniqueName: \"kubernetes.io/projected/0958d458-17b0-494f-b47f-9e053ee1e665-kube-api-access-46njc\") on node \"crc\" DevicePath \"\"" Dec 11 10:55:06 crc kubenswrapper[4746]: I1211 10:55:06.269727 4746 scope.go:117] "RemoveContainer" containerID="72f84c51c46ff0a1bfd09bc7d84b4009573e1c8e0d2421a33ba1a7010faddc14" Dec 11 10:55:06 crc kubenswrapper[4746]: I1211 10:55:06.269889 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jttbd/crc-debug-x6mb2" Dec 11 10:55:06 crc kubenswrapper[4746]: I1211 10:55:06.701140 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jttbd/crc-debug-7vflh"] Dec 11 10:55:06 crc kubenswrapper[4746]: E1211 10:55:06.702940 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0958d458-17b0-494f-b47f-9e053ee1e665" containerName="container-00" Dec 11 10:55:06 crc kubenswrapper[4746]: I1211 10:55:06.702982 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="0958d458-17b0-494f-b47f-9e053ee1e665" containerName="container-00" Dec 11 10:55:06 crc kubenswrapper[4746]: I1211 10:55:06.703344 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="0958d458-17b0-494f-b47f-9e053ee1e665" containerName="container-00" Dec 11 10:55:06 crc kubenswrapper[4746]: I1211 10:55:06.704308 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jttbd/crc-debug-7vflh" Dec 11 10:55:06 crc kubenswrapper[4746]: I1211 10:55:06.797864 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7c2c34d-e10b-4409-9c26-aa4ebf0db66c-host\") pod \"crc-debug-7vflh\" (UID: \"d7c2c34d-e10b-4409-9c26-aa4ebf0db66c\") " pod="openshift-must-gather-jttbd/crc-debug-7vflh" Dec 11 10:55:06 crc kubenswrapper[4746]: I1211 10:55:06.798206 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p74pg\" (UniqueName: \"kubernetes.io/projected/d7c2c34d-e10b-4409-9c26-aa4ebf0db66c-kube-api-access-p74pg\") pod \"crc-debug-7vflh\" (UID: \"d7c2c34d-e10b-4409-9c26-aa4ebf0db66c\") " pod="openshift-must-gather-jttbd/crc-debug-7vflh" Dec 11 10:55:06 crc kubenswrapper[4746]: I1211 10:55:06.900600 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p74pg\" (UniqueName: 
\"kubernetes.io/projected/d7c2c34d-e10b-4409-9c26-aa4ebf0db66c-kube-api-access-p74pg\") pod \"crc-debug-7vflh\" (UID: \"d7c2c34d-e10b-4409-9c26-aa4ebf0db66c\") " pod="openshift-must-gather-jttbd/crc-debug-7vflh" Dec 11 10:55:06 crc kubenswrapper[4746]: I1211 10:55:06.901190 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7c2c34d-e10b-4409-9c26-aa4ebf0db66c-host\") pod \"crc-debug-7vflh\" (UID: \"d7c2c34d-e10b-4409-9c26-aa4ebf0db66c\") " pod="openshift-must-gather-jttbd/crc-debug-7vflh" Dec 11 10:55:06 crc kubenswrapper[4746]: I1211 10:55:06.901342 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7c2c34d-e10b-4409-9c26-aa4ebf0db66c-host\") pod \"crc-debug-7vflh\" (UID: \"d7c2c34d-e10b-4409-9c26-aa4ebf0db66c\") " pod="openshift-must-gather-jttbd/crc-debug-7vflh" Dec 11 10:55:06 crc kubenswrapper[4746]: I1211 10:55:06.920969 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p74pg\" (UniqueName: \"kubernetes.io/projected/d7c2c34d-e10b-4409-9c26-aa4ebf0db66c-kube-api-access-p74pg\") pod \"crc-debug-7vflh\" (UID: \"d7c2c34d-e10b-4409-9c26-aa4ebf0db66c\") " pod="openshift-must-gather-jttbd/crc-debug-7vflh" Dec 11 10:55:07 crc kubenswrapper[4746]: I1211 10:55:07.027336 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jttbd/crc-debug-7vflh" Dec 11 10:55:07 crc kubenswrapper[4746]: I1211 10:55:07.281236 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jttbd/crc-debug-7vflh" event={"ID":"d7c2c34d-e10b-4409-9c26-aa4ebf0db66c","Type":"ContainerStarted","Data":"91ec970caf25e002de44e86a85a7e8a151f4ac5ddc6c51ef69275597c9a366b8"} Dec 11 10:55:08 crc kubenswrapper[4746]: I1211 10:55:08.297745 4746 generic.go:334] "Generic (PLEG): container finished" podID="d7c2c34d-e10b-4409-9c26-aa4ebf0db66c" containerID="8b35f5fe8e2e4baff663902250ad24bc4c6fc925f23694d876d9fe331c768504" exitCode=0 Dec 11 10:55:08 crc kubenswrapper[4746]: I1211 10:55:08.297863 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jttbd/crc-debug-7vflh" event={"ID":"d7c2c34d-e10b-4409-9c26-aa4ebf0db66c","Type":"ContainerDied","Data":"8b35f5fe8e2e4baff663902250ad24bc4c6fc925f23694d876d9fe331c768504"} Dec 11 10:55:08 crc kubenswrapper[4746]: I1211 10:55:08.870325 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jttbd/crc-debug-7vflh"] Dec 11 10:55:08 crc kubenswrapper[4746]: I1211 10:55:08.879173 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jttbd/crc-debug-7vflh"] Dec 11 10:55:09 crc kubenswrapper[4746]: I1211 10:55:09.418918 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jttbd/crc-debug-7vflh" Dec 11 10:55:09 crc kubenswrapper[4746]: I1211 10:55:09.571426 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7c2c34d-e10b-4409-9c26-aa4ebf0db66c-host\") pod \"d7c2c34d-e10b-4409-9c26-aa4ebf0db66c\" (UID: \"d7c2c34d-e10b-4409-9c26-aa4ebf0db66c\") " Dec 11 10:55:09 crc kubenswrapper[4746]: I1211 10:55:09.571533 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p74pg\" (UniqueName: \"kubernetes.io/projected/d7c2c34d-e10b-4409-9c26-aa4ebf0db66c-kube-api-access-p74pg\") pod \"d7c2c34d-e10b-4409-9c26-aa4ebf0db66c\" (UID: \"d7c2c34d-e10b-4409-9c26-aa4ebf0db66c\") " Dec 11 10:55:09 crc kubenswrapper[4746]: I1211 10:55:09.572385 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d7c2c34d-e10b-4409-9c26-aa4ebf0db66c-host" (OuterVolumeSpecName: "host") pod "d7c2c34d-e10b-4409-9c26-aa4ebf0db66c" (UID: "d7c2c34d-e10b-4409-9c26-aa4ebf0db66c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:55:09 crc kubenswrapper[4746]: I1211 10:55:09.584740 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7c2c34d-e10b-4409-9c26-aa4ebf0db66c-kube-api-access-p74pg" (OuterVolumeSpecName: "kube-api-access-p74pg") pod "d7c2c34d-e10b-4409-9c26-aa4ebf0db66c" (UID: "d7c2c34d-e10b-4409-9c26-aa4ebf0db66c"). InnerVolumeSpecName "kube-api-access-p74pg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:55:09 crc kubenswrapper[4746]: I1211 10:55:09.652261 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7c2c34d-e10b-4409-9c26-aa4ebf0db66c" path="/var/lib/kubelet/pods/d7c2c34d-e10b-4409-9c26-aa4ebf0db66c/volumes" Dec 11 10:55:09 crc kubenswrapper[4746]: I1211 10:55:09.674147 4746 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d7c2c34d-e10b-4409-9c26-aa4ebf0db66c-host\") on node \"crc\" DevicePath \"\"" Dec 11 10:55:09 crc kubenswrapper[4746]: I1211 10:55:09.674181 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p74pg\" (UniqueName: \"kubernetes.io/projected/d7c2c34d-e10b-4409-9c26-aa4ebf0db66c-kube-api-access-p74pg\") on node \"crc\" DevicePath \"\"" Dec 11 10:55:10 crc kubenswrapper[4746]: I1211 10:55:10.056236 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jttbd/crc-debug-xvg57"] Dec 11 10:55:10 crc kubenswrapper[4746]: E1211 10:55:10.056914 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7c2c34d-e10b-4409-9c26-aa4ebf0db66c" containerName="container-00" Dec 11 10:55:10 crc kubenswrapper[4746]: I1211 10:55:10.056947 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7c2c34d-e10b-4409-9c26-aa4ebf0db66c" containerName="container-00" Dec 11 10:55:10 crc kubenswrapper[4746]: I1211 10:55:10.057368 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7c2c34d-e10b-4409-9c26-aa4ebf0db66c" containerName="container-00" Dec 11 10:55:10 crc kubenswrapper[4746]: I1211 10:55:10.058373 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jttbd/crc-debug-xvg57" Dec 11 10:55:10 crc kubenswrapper[4746]: I1211 10:55:10.183348 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt2nn\" (UniqueName: \"kubernetes.io/projected/c80226d5-f2cc-4b1a-8369-eac06ef433cf-kube-api-access-bt2nn\") pod \"crc-debug-xvg57\" (UID: \"c80226d5-f2cc-4b1a-8369-eac06ef433cf\") " pod="openshift-must-gather-jttbd/crc-debug-xvg57" Dec 11 10:55:10 crc kubenswrapper[4746]: I1211 10:55:10.183425 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c80226d5-f2cc-4b1a-8369-eac06ef433cf-host\") pod \"crc-debug-xvg57\" (UID: \"c80226d5-f2cc-4b1a-8369-eac06ef433cf\") " pod="openshift-must-gather-jttbd/crc-debug-xvg57" Dec 11 10:55:10 crc kubenswrapper[4746]: I1211 10:55:10.285728 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt2nn\" (UniqueName: \"kubernetes.io/projected/c80226d5-f2cc-4b1a-8369-eac06ef433cf-kube-api-access-bt2nn\") pod \"crc-debug-xvg57\" (UID: \"c80226d5-f2cc-4b1a-8369-eac06ef433cf\") " pod="openshift-must-gather-jttbd/crc-debug-xvg57" Dec 11 10:55:10 crc kubenswrapper[4746]: I1211 10:55:10.285835 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c80226d5-f2cc-4b1a-8369-eac06ef433cf-host\") pod \"crc-debug-xvg57\" (UID: \"c80226d5-f2cc-4b1a-8369-eac06ef433cf\") " pod="openshift-must-gather-jttbd/crc-debug-xvg57" Dec 11 10:55:10 crc kubenswrapper[4746]: I1211 10:55:10.286086 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c80226d5-f2cc-4b1a-8369-eac06ef433cf-host\") pod \"crc-debug-xvg57\" (UID: \"c80226d5-f2cc-4b1a-8369-eac06ef433cf\") " pod="openshift-must-gather-jttbd/crc-debug-xvg57" Dec 11 10:55:10 crc 
kubenswrapper[4746]: I1211 10:55:10.309880 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt2nn\" (UniqueName: \"kubernetes.io/projected/c80226d5-f2cc-4b1a-8369-eac06ef433cf-kube-api-access-bt2nn\") pod \"crc-debug-xvg57\" (UID: \"c80226d5-f2cc-4b1a-8369-eac06ef433cf\") " pod="openshift-must-gather-jttbd/crc-debug-xvg57" Dec 11 10:55:10 crc kubenswrapper[4746]: I1211 10:55:10.319949 4746 scope.go:117] "RemoveContainer" containerID="8b35f5fe8e2e4baff663902250ad24bc4c6fc925f23694d876d9fe331c768504" Dec 11 10:55:10 crc kubenswrapper[4746]: I1211 10:55:10.319991 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jttbd/crc-debug-7vflh" Dec 11 10:55:10 crc kubenswrapper[4746]: I1211 10:55:10.380983 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jttbd/crc-debug-xvg57" Dec 11 10:55:11 crc kubenswrapper[4746]: I1211 10:55:11.334918 4746 generic.go:334] "Generic (PLEG): container finished" podID="c80226d5-f2cc-4b1a-8369-eac06ef433cf" containerID="b250ba976cfc0911e95fb9c0a94c0cfacf2e836867cb615a5ad4f7e17adaa6f6" exitCode=0 Dec 11 10:55:11 crc kubenswrapper[4746]: I1211 10:55:11.335017 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jttbd/crc-debug-xvg57" event={"ID":"c80226d5-f2cc-4b1a-8369-eac06ef433cf","Type":"ContainerDied","Data":"b250ba976cfc0911e95fb9c0a94c0cfacf2e836867cb615a5ad4f7e17adaa6f6"} Dec 11 10:55:11 crc kubenswrapper[4746]: I1211 10:55:11.335370 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jttbd/crc-debug-xvg57" event={"ID":"c80226d5-f2cc-4b1a-8369-eac06ef433cf","Type":"ContainerStarted","Data":"bf580a5330130364466dae0218b3fe285550afba2efe24bc0fb7ede997c7ab65"} Dec 11 10:55:11 crc kubenswrapper[4746]: I1211 10:55:11.378609 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jttbd/crc-debug-xvg57"] Dec 11 
10:55:11 crc kubenswrapper[4746]: I1211 10:55:11.387971 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jttbd/crc-debug-xvg57"] Dec 11 10:55:12 crc kubenswrapper[4746]: I1211 10:55:12.450407 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jttbd/crc-debug-xvg57" Dec 11 10:55:12 crc kubenswrapper[4746]: I1211 10:55:12.634221 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c80226d5-f2cc-4b1a-8369-eac06ef433cf-host\") pod \"c80226d5-f2cc-4b1a-8369-eac06ef433cf\" (UID: \"c80226d5-f2cc-4b1a-8369-eac06ef433cf\") " Dec 11 10:55:12 crc kubenswrapper[4746]: I1211 10:55:12.634317 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt2nn\" (UniqueName: \"kubernetes.io/projected/c80226d5-f2cc-4b1a-8369-eac06ef433cf-kube-api-access-bt2nn\") pod \"c80226d5-f2cc-4b1a-8369-eac06ef433cf\" (UID: \"c80226d5-f2cc-4b1a-8369-eac06ef433cf\") " Dec 11 10:55:12 crc kubenswrapper[4746]: I1211 10:55:12.634370 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c80226d5-f2cc-4b1a-8369-eac06ef433cf-host" (OuterVolumeSpecName: "host") pod "c80226d5-f2cc-4b1a-8369-eac06ef433cf" (UID: "c80226d5-f2cc-4b1a-8369-eac06ef433cf"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 10:55:12 crc kubenswrapper[4746]: I1211 10:55:12.635253 4746 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c80226d5-f2cc-4b1a-8369-eac06ef433cf-host\") on node \"crc\" DevicePath \"\"" Dec 11 10:55:12 crc kubenswrapper[4746]: I1211 10:55:12.640077 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c80226d5-f2cc-4b1a-8369-eac06ef433cf-kube-api-access-bt2nn" (OuterVolumeSpecName: "kube-api-access-bt2nn") pod "c80226d5-f2cc-4b1a-8369-eac06ef433cf" (UID: "c80226d5-f2cc-4b1a-8369-eac06ef433cf"). InnerVolumeSpecName "kube-api-access-bt2nn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:55:12 crc kubenswrapper[4746]: I1211 10:55:12.737332 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt2nn\" (UniqueName: \"kubernetes.io/projected/c80226d5-f2cc-4b1a-8369-eac06ef433cf-kube-api-access-bt2nn\") on node \"crc\" DevicePath \"\"" Dec 11 10:55:13 crc kubenswrapper[4746]: I1211 10:55:13.356954 4746 scope.go:117] "RemoveContainer" containerID="b250ba976cfc0911e95fb9c0a94c0cfacf2e836867cb615a5ad4f7e17adaa6f6" Dec 11 10:55:13 crc kubenswrapper[4746]: I1211 10:55:13.357171 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jttbd/crc-debug-xvg57" Dec 11 10:55:13 crc kubenswrapper[4746]: I1211 10:55:13.641732 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c80226d5-f2cc-4b1a-8369-eac06ef433cf" path="/var/lib/kubelet/pods/c80226d5-f2cc-4b1a-8369-eac06ef433cf/volumes" Dec 11 10:55:28 crc kubenswrapper[4746]: I1211 10:55:28.060565 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-d68478596-8jx82_80265cad-1b6f-4dfc-aee2-04a1da6152fc/barbican-api/0.log" Dec 11 10:55:28 crc kubenswrapper[4746]: I1211 10:55:28.207763 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-d68478596-8jx82_80265cad-1b6f-4dfc-aee2-04a1da6152fc/barbican-api-log/0.log" Dec 11 10:55:28 crc kubenswrapper[4746]: I1211 10:55:28.308362 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d4579cd86-47qwg_6346d2a5-4279-407e-981e-423993612a5c/barbican-keystone-listener/0.log" Dec 11 10:55:28 crc kubenswrapper[4746]: I1211 10:55:28.377819 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d4579cd86-47qwg_6346d2a5-4279-407e-981e-423993612a5c/barbican-keystone-listener-log/0.log" Dec 11 10:55:28 crc kubenswrapper[4746]: I1211 10:55:28.540154 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55c6884c59-pb7cb_17f2e937-e45d-48e4-be34-f013cb61dc7e/barbican-worker/0.log" Dec 11 10:55:28 crc kubenswrapper[4746]: I1211 10:55:28.546611 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55c6884c59-pb7cb_17f2e937-e45d-48e4-be34-f013cb61dc7e/barbican-worker-log/0.log" Dec 11 10:55:28 crc kubenswrapper[4746]: I1211 10:55:28.734378 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj_3de7e541-4120-4c78-866b-9991eb4d1810/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 10:55:28 crc kubenswrapper[4746]: I1211 10:55:28.797793 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad/ceilometer-central-agent/0.log" Dec 11 10:55:28 crc kubenswrapper[4746]: I1211 10:55:28.907548 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad/ceilometer-notification-agent/0.log" Dec 11 10:55:28 crc kubenswrapper[4746]: I1211 10:55:28.989511 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad/sg-core/0.log" Dec 11 10:55:28 crc kubenswrapper[4746]: I1211 10:55:28.990773 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad/proxy-httpd/0.log" Dec 11 10:55:29 crc kubenswrapper[4746]: I1211 10:55:29.179390 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_93766680-5fd5-4cc4-9ab8-128daeec573d/cinder-api/0.log" Dec 11 10:55:29 crc kubenswrapper[4746]: I1211 10:55:29.255645 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_93766680-5fd5-4cc4-9ab8-128daeec573d/cinder-api-log/0.log" Dec 11 10:55:29 crc kubenswrapper[4746]: I1211 10:55:29.459921 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_663cb4b5-0c8f-4518-9ba3-1d34e8b1949a/probe/0.log" Dec 11 10:55:29 crc kubenswrapper[4746]: I1211 10:55:29.461894 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_663cb4b5-0c8f-4518-9ba3-1d34e8b1949a/cinder-scheduler/0.log" Dec 11 10:55:29 crc kubenswrapper[4746]: I1211 10:55:29.492559 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm_f6fdd767-cd5e-4858-9c19-ebc73fd789d4/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 10:55:29 crc kubenswrapper[4746]: I1211 10:55:29.691659 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2_797c27c3-e8d6-4324-926e-e3b859e05b51/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 10:55:29 crc kubenswrapper[4746]: I1211 10:55:29.818589 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-wfn8b_0a51bb6a-6ca0-4e2d-8427-70e92cd4730d/init/0.log" Dec 11 10:55:30 crc kubenswrapper[4746]: I1211 10:55:30.119231 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-wfn8b_0a51bb6a-6ca0-4e2d-8427-70e92cd4730d/init/0.log" Dec 11 10:55:30 crc kubenswrapper[4746]: I1211 10:55:30.160084 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-wfn8b_0a51bb6a-6ca0-4e2d-8427-70e92cd4730d/dnsmasq-dns/0.log" Dec 11 10:55:30 crc kubenswrapper[4746]: I1211 10:55:30.202996 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm_87d58bd7-f602-4d6b-b16c-1178233ebe3f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 10:55:30 crc kubenswrapper[4746]: I1211 10:55:30.404270 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_db60fce8-8218-4af0-84db-6bbbe7218d4f/glance-httpd/0.log" Dec 11 10:55:30 crc kubenswrapper[4746]: I1211 10:55:30.463760 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_db60fce8-8218-4af0-84db-6bbbe7218d4f/glance-log/0.log" Dec 11 10:55:30 crc kubenswrapper[4746]: I1211 10:55:30.583380 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_805456fb-d8e0-4341-b5ad-93906c3ad0e5/glance-httpd/0.log" Dec 11 10:55:30 crc kubenswrapper[4746]: I1211 10:55:30.645828 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_805456fb-d8e0-4341-b5ad-93906c3ad0e5/glance-log/0.log" Dec 11 10:55:30 crc kubenswrapper[4746]: I1211 10:55:30.864173 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-b7f654f86-sh94c_b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81/horizon/0.log" Dec 11 10:55:30 crc kubenswrapper[4746]: I1211 10:55:30.996963 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc_a9076913-deed-4328-8c4b-147c3f7bac9a/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 10:55:31 crc kubenswrapper[4746]: I1211 10:55:31.264170 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-b7f654f86-sh94c_b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81/horizon-log/0.log" Dec 11 10:55:31 crc kubenswrapper[4746]: I1211 10:55:31.265328 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-94sts_9a09e7c3-6aed-4155-bbe9-7be9b885cd57/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 10:55:31 crc kubenswrapper[4746]: I1211 10:55:31.514455 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a12b0580-9910-43bc-ac49-bbb03f54211b/kube-state-metrics/0.log" Dec 11 10:55:31 crc kubenswrapper[4746]: I1211 10:55:31.621929 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-65c49f59b-9mqvh_689e0dd9-7055-4ca2-81b3-c66d9850e166/keystone-api/0.log" Dec 11 10:55:31 crc kubenswrapper[4746]: I1211 10:55:31.995704 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-85b2t_1424dbeb-f9d9-48d1-8b92-14828c8ea326/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 10:55:32 crc kubenswrapper[4746]: I1211 10:55:32.560029 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7754896b7c-5hf99_217bbeb1-db62-4c24-82de-be79c9bad92b/neutron-httpd/0.log" Dec 11 10:55:32 crc kubenswrapper[4746]: I1211 10:55:32.591880 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7754896b7c-5hf99_217bbeb1-db62-4c24-82de-be79c9bad92b/neutron-api/0.log" Dec 11 10:55:32 crc kubenswrapper[4746]: I1211 10:55:32.903480 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw_e1498cd2-84fe-4769-8fc5-ffe9f8e32251/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 10:55:33 crc kubenswrapper[4746]: I1211 10:55:33.459850 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5af9e7da-bc61-40ee-8c58-9f2201d12884/nova-cell0-conductor-conductor/0.log" Dec 11 10:55:33 crc kubenswrapper[4746]: I1211 10:55:33.532149 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_015a8233-ebde-4703-a8bb-81267822daaa/nova-api-log/0.log" Dec 11 10:55:33 crc kubenswrapper[4746]: I1211 10:55:33.619081 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_015a8233-ebde-4703-a8bb-81267822daaa/nova-api-api/0.log" Dec 11 10:55:33 crc kubenswrapper[4746]: I1211 10:55:33.855525 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3d3d3996-336f-4ca7-a8eb-16a243b55115/nova-cell1-conductor-conductor/0.log" Dec 11 10:55:33 crc kubenswrapper[4746]: I1211 10:55:33.922526 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_12f63c9f-c2d2-45a1-99b7-ee148b220e4d/nova-cell1-novncproxy-novncproxy/0.log" Dec 11 10:55:34 crc kubenswrapper[4746]: I1211 10:55:34.093949 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-jsx69_731d2759-47a3-4e5e-a753-e2cb1cb7c982/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 10:55:34 crc kubenswrapper[4746]: I1211 10:55:34.278386 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3dcbca6a-41cb-489b-9632-00e734e2c95b/nova-metadata-log/0.log" Dec 11 10:55:34 crc kubenswrapper[4746]: I1211 10:55:34.557610 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_5486ad2e-b2db-4967-8308-592b79065f54/nova-scheduler-scheduler/0.log" Dec 11 10:55:34 crc kubenswrapper[4746]: I1211 10:55:34.618456 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f5170f24-7cb7-43d5-bacc-c8224cfabcf4/mysql-bootstrap/0.log" Dec 11 10:55:35 crc kubenswrapper[4746]: I1211 10:55:35.031128 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f5170f24-7cb7-43d5-bacc-c8224cfabcf4/mysql-bootstrap/0.log" Dec 11 10:55:35 crc kubenswrapper[4746]: I1211 10:55:35.031692 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f5170f24-7cb7-43d5-bacc-c8224cfabcf4/galera/0.log" Dec 11 10:55:35 crc kubenswrapper[4746]: I1211 10:55:35.282905 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f35f21ce-59cb-4ee0-850c-9aba4010c890/mysql-bootstrap/0.log" Dec 11 10:55:35 crc kubenswrapper[4746]: I1211 10:55:35.503002 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f35f21ce-59cb-4ee0-850c-9aba4010c890/mysql-bootstrap/0.log" Dec 11 10:55:35 crc kubenswrapper[4746]: I1211 10:55:35.515096 4746 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f35f21ce-59cb-4ee0-850c-9aba4010c890/galera/0.log" Dec 11 10:55:35 crc kubenswrapper[4746]: I1211 10:55:35.736622 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f39575aa-fcfa-42ad-aceb-a8611602030f/openstackclient/0.log" Dec 11 10:55:35 crc kubenswrapper[4746]: I1211 10:55:35.792829 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-242vs_31760b52-7caf-49dd-bf1e-2d2f88b000a2/ovn-controller/0.log" Dec 11 10:55:35 crc kubenswrapper[4746]: I1211 10:55:35.868683 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3dcbca6a-41cb-489b-9632-00e734e2c95b/nova-metadata-metadata/0.log" Dec 11 10:55:36 crc kubenswrapper[4746]: I1211 10:55:36.107790 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w5jlj_fbd694c4-2e54-4535-a357-0fb7ffdcabdb/ovsdb-server-init/0.log" Dec 11 10:55:36 crc kubenswrapper[4746]: I1211 10:55:36.133681 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jklpb_bc3dc4dd-014a-42fe-a1e7-ee2d10866d75/openstack-network-exporter/0.log" Dec 11 10:55:36 crc kubenswrapper[4746]: I1211 10:55:36.320594 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w5jlj_fbd694c4-2e54-4535-a357-0fb7ffdcabdb/ovs-vswitchd/0.log" Dec 11 10:55:36 crc kubenswrapper[4746]: I1211 10:55:36.320690 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w5jlj_fbd694c4-2e54-4535-a357-0fb7ffdcabdb/ovsdb-server-init/0.log" Dec 11 10:55:36 crc kubenswrapper[4746]: I1211 10:55:36.453414 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w5jlj_fbd694c4-2e54-4535-a357-0fb7ffdcabdb/ovsdb-server/0.log" Dec 11 10:55:36 crc kubenswrapper[4746]: I1211 10:55:36.563601 4746 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-tn2d9_a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 10:55:36 crc kubenswrapper[4746]: I1211 10:55:36.617342 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_01f4f65b-37fc-4500-a9ba-ba3a717c37bb/openstack-network-exporter/0.log" Dec 11 10:55:36 crc kubenswrapper[4746]: I1211 10:55:36.732377 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_01f4f65b-37fc-4500-a9ba-ba3a717c37bb/ovn-northd/0.log" Dec 11 10:55:36 crc kubenswrapper[4746]: I1211 10:55:36.808358 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_44843435-5bdd-416c-af49-abc0ce7c6c03/openstack-network-exporter/0.log" Dec 11 10:55:36 crc kubenswrapper[4746]: I1211 10:55:36.896566 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_44843435-5bdd-416c-af49-abc0ce7c6c03/ovsdbserver-nb/0.log" Dec 11 10:55:37 crc kubenswrapper[4746]: I1211 10:55:37.071278 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f17050a1-f53d-4058-9b22-1d26754f13d0/ovsdbserver-sb/0.log" Dec 11 10:55:37 crc kubenswrapper[4746]: I1211 10:55:37.103168 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f17050a1-f53d-4058-9b22-1d26754f13d0/openstack-network-exporter/0.log" Dec 11 10:55:37 crc kubenswrapper[4746]: I1211 10:55:37.380912 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bb679f888-dj844_621d56dd-8011-4236-a393-6b57891b3f37/placement-log/0.log" Dec 11 10:55:37 crc kubenswrapper[4746]: I1211 10:55:37.388750 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bb679f888-dj844_621d56dd-8011-4236-a393-6b57891b3f37/placement-api/0.log" Dec 11 10:55:37 crc kubenswrapper[4746]: I1211 10:55:37.465933 4746 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e96da7ab-1e2a-4f9c-bb48-9955198a646a/setup-container/0.log" Dec 11 10:55:37 crc kubenswrapper[4746]: I1211 10:55:37.739799 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e96da7ab-1e2a-4f9c-bb48-9955198a646a/setup-container/0.log" Dec 11 10:55:37 crc kubenswrapper[4746]: I1211 10:55:37.776371 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e96da7ab-1e2a-4f9c-bb48-9955198a646a/rabbitmq/0.log" Dec 11 10:55:37 crc kubenswrapper[4746]: I1211 10:55:37.778642 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9c61eb65-bc9f-4b9f-84c8-286e25295809/setup-container/0.log" Dec 11 10:55:37 crc kubenswrapper[4746]: I1211 10:55:37.945600 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9c61eb65-bc9f-4b9f-84c8-286e25295809/setup-container/0.log" Dec 11 10:55:38 crc kubenswrapper[4746]: I1211 10:55:38.046904 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9c61eb65-bc9f-4b9f-84c8-286e25295809/rabbitmq/0.log" Dec 11 10:55:38 crc kubenswrapper[4746]: I1211 10:55:38.099248 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw_a9fa6020-1f64-4cc4-8b95-7372a5ce6f92/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 10:55:38 crc kubenswrapper[4746]: I1211 10:55:38.338460 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-9j64g_e9138713-a26f-45a2-8222-3bb43892a757/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 10:55:38 crc kubenswrapper[4746]: I1211 10:55:38.345894 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h_30f79518-b92a-4058-8834-45c45c284eee/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 10:55:38 crc kubenswrapper[4746]: I1211 10:55:38.591426 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-5vqjm_6da9c642-e03d-463d-a3f1-c74bb27843c2/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 10:55:38 crc kubenswrapper[4746]: I1211 10:55:38.667728 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-5c2w5_ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c/ssh-known-hosts-edpm-deployment/0.log" Dec 11 10:55:38 crc kubenswrapper[4746]: I1211 10:55:38.836418 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7cc45cfb45-bbbq8_5c65e9de-7890-47aa-bcf7-48cdfd6dd262/proxy-server/0.log" Dec 11 10:55:38 crc kubenswrapper[4746]: I1211 10:55:38.953122 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7cc45cfb45-bbbq8_5c65e9de-7890-47aa-bcf7-48cdfd6dd262/proxy-httpd/0.log" Dec 11 10:55:39 crc kubenswrapper[4746]: I1211 10:55:39.032358 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wrnch_94f9d09a-c638-4da1-a6e0-3337621da894/swift-ring-rebalance/0.log" Dec 11 10:55:39 crc kubenswrapper[4746]: I1211 10:55:39.189667 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/account-reaper/0.log" Dec 11 10:55:39 crc kubenswrapper[4746]: I1211 10:55:39.216177 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/account-auditor/0.log" Dec 11 10:55:39 crc kubenswrapper[4746]: I1211 10:55:39.272331 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/account-replicator/0.log" Dec 11 10:55:39 crc kubenswrapper[4746]: I1211 10:55:39.410181 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/account-server/0.log" Dec 11 10:55:39 crc kubenswrapper[4746]: I1211 10:55:39.467956 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/container-auditor/0.log" Dec 11 10:55:39 crc kubenswrapper[4746]: I1211 10:55:39.513645 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/container-replicator/0.log" Dec 11 10:55:39 crc kubenswrapper[4746]: I1211 10:55:39.556074 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/container-server/0.log" Dec 11 10:55:39 crc kubenswrapper[4746]: I1211 10:55:39.648333 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/container-updater/0.log" Dec 11 10:55:39 crc kubenswrapper[4746]: I1211 10:55:39.723817 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/object-auditor/0.log" Dec 11 10:55:39 crc kubenswrapper[4746]: I1211 10:55:39.748550 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/object-expirer/0.log" Dec 11 10:55:39 crc kubenswrapper[4746]: I1211 10:55:39.837107 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/object-replicator/0.log" Dec 11 10:55:39 crc kubenswrapper[4746]: I1211 10:55:39.931464 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/object-updater/0.log" Dec 11 10:55:39 crc kubenswrapper[4746]: I1211 10:55:39.941749 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/object-server/0.log" Dec 11 10:55:39 crc kubenswrapper[4746]: I1211 10:55:39.978174 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/rsync/0.log" Dec 11 10:55:40 crc kubenswrapper[4746]: I1211 10:55:40.105452 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/swift-recon-cron/0.log" Dec 11 10:55:40 crc kubenswrapper[4746]: I1211 10:55:40.215216 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4_c19e1748-770d-45a1-b823-77a77b6f22a4/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 10:55:40 crc kubenswrapper[4746]: I1211 10:55:40.615462 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_dac76301-3400-4177-8a19-8b97a7480321/tempest-tests-tempest-tests-runner/0.log" Dec 11 10:55:40 crc kubenswrapper[4746]: I1211 10:55:40.725494 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_c7853e46-c0f0-403f-b095-fc60d413a35f/test-operator-logs-container/0.log" Dec 11 10:55:40 crc kubenswrapper[4746]: I1211 10:55:40.852774 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-mw75t_0d5a90b1-c946-4b31-9337-9b13d58f9819/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 10:55:51 crc kubenswrapper[4746]: I1211 10:55:51.496724 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_memcached-0_a99efa0a-c9ba-4a4e-9014-fe1efed47a8a/memcached/0.log" Dec 11 10:56:07 crc kubenswrapper[4746]: I1211 10:56:07.203647 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm_bdce6d6c-a4ea-49a8-bef4-da390b678b24/util/0.log" Dec 11 10:56:07 crc kubenswrapper[4746]: I1211 10:56:07.347129 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm_bdce6d6c-a4ea-49a8-bef4-da390b678b24/util/0.log" Dec 11 10:56:07 crc kubenswrapper[4746]: I1211 10:56:07.406124 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm_bdce6d6c-a4ea-49a8-bef4-da390b678b24/pull/0.log" Dec 11 10:56:07 crc kubenswrapper[4746]: I1211 10:56:07.412171 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm_bdce6d6c-a4ea-49a8-bef4-da390b678b24/pull/0.log" Dec 11 10:56:07 crc kubenswrapper[4746]: I1211 10:56:07.607865 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm_bdce6d6c-a4ea-49a8-bef4-da390b678b24/pull/0.log" Dec 11 10:56:07 crc kubenswrapper[4746]: I1211 10:56:07.682847 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm_bdce6d6c-a4ea-49a8-bef4-da390b678b24/extract/0.log" Dec 11 10:56:07 crc kubenswrapper[4746]: I1211 10:56:07.686440 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm_bdce6d6c-a4ea-49a8-bef4-da390b678b24/util/0.log" Dec 11 10:56:07 crc kubenswrapper[4746]: I1211 10:56:07.842135 4746 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-7dcw7_3b4184f0-3e35-4e70-9adc-87a4681c343c/kube-rbac-proxy/0.log" Dec 11 10:56:07 crc kubenswrapper[4746]: I1211 10:56:07.969100 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-7dcw7_3b4184f0-3e35-4e70-9adc-87a4681c343c/manager/0.log" Dec 11 10:56:07 crc kubenswrapper[4746]: I1211 10:56:07.989114 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-glgvn_b0e6f3b3-a8b7-4bca-8e55-118bd35a9635/kube-rbac-proxy/0.log" Dec 11 10:56:08 crc kubenswrapper[4746]: I1211 10:56:08.120278 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-glgvn_b0e6f3b3-a8b7-4bca-8e55-118bd35a9635/manager/0.log" Dec 11 10:56:08 crc kubenswrapper[4746]: I1211 10:56:08.197535 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-bkvrk_f9f2bc47-53f4-4216-8fb2-27f2db87123e/manager/0.log" Dec 11 10:56:08 crc kubenswrapper[4746]: I1211 10:56:08.225593 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-bkvrk_f9f2bc47-53f4-4216-8fb2-27f2db87123e/kube-rbac-proxy/0.log" Dec 11 10:56:08 crc kubenswrapper[4746]: I1211 10:56:08.443343 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-5gjg9_01451fbe-7fd7-447c-b6ef-967f7ddff94b/kube-rbac-proxy/0.log" Dec 11 10:56:08 crc kubenswrapper[4746]: I1211 10:56:08.455968 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-5gjg9_01451fbe-7fd7-447c-b6ef-967f7ddff94b/manager/0.log" Dec 11 10:56:08 crc 
kubenswrapper[4746]: I1211 10:56:08.614112 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-dntwk_2d353fc2-d0c0-47ed-be04-acc87fd980a7/kube-rbac-proxy/0.log" Dec 11 10:56:08 crc kubenswrapper[4746]: I1211 10:56:08.637897 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-dntwk_2d353fc2-d0c0-47ed-be04-acc87fd980a7/manager/0.log" Dec 11 10:56:08 crc kubenswrapper[4746]: I1211 10:56:08.716393 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-7ftgl_fecc9092-bba1-4488-af41-3d970dba0968/kube-rbac-proxy/0.log" Dec 11 10:56:08 crc kubenswrapper[4746]: I1211 10:56:08.826823 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-7ftgl_fecc9092-bba1-4488-af41-3d970dba0968/manager/0.log" Dec 11 10:56:08 crc kubenswrapper[4746]: I1211 10:56:08.921017 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-n8c44_4a11cb95-3107-4526-8ab3-82bb6fd57cef/kube-rbac-proxy/0.log" Dec 11 10:56:09 crc kubenswrapper[4746]: I1211 10:56:09.065864 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-zstsf_b47efaee-6921-4f8b-876a-3cf52bd10a27/kube-rbac-proxy/0.log" Dec 11 10:56:09 crc kubenswrapper[4746]: I1211 10:56:09.143310 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-n8c44_4a11cb95-3107-4526-8ab3-82bb6fd57cef/manager/0.log" Dec 11 10:56:09 crc kubenswrapper[4746]: I1211 10:56:09.190176 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-zstsf_b47efaee-6921-4f8b-876a-3cf52bd10a27/manager/0.log" Dec 11 10:56:09 crc kubenswrapper[4746]: I1211 10:56:09.293595 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-x5ghz_9b6740eb-7439-465a-b30a-c838a4d65be6/kube-rbac-proxy/0.log" Dec 11 10:56:09 crc kubenswrapper[4746]: I1211 10:56:09.397576 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-x5ghz_9b6740eb-7439-465a-b30a-c838a4d65be6/manager/0.log" Dec 11 10:56:09 crc kubenswrapper[4746]: I1211 10:56:09.506476 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-8pn4t_252b923b-a265-46c1-8c3e-9ef62d5b1f7a/kube-rbac-proxy/0.log" Dec 11 10:56:09 crc kubenswrapper[4746]: I1211 10:56:09.516858 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-8pn4t_252b923b-a265-46c1-8c3e-9ef62d5b1f7a/manager/0.log" Dec 11 10:56:09 crc kubenswrapper[4746]: I1211 10:56:09.640033 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-wd9vj_4c437995-b526-4ae3-9956-b541694d54d4/kube-rbac-proxy/0.log" Dec 11 10:56:09 crc kubenswrapper[4746]: I1211 10:56:09.712563 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-wd9vj_4c437995-b526-4ae3-9956-b541694d54d4/manager/0.log" Dec 11 10:56:09 crc kubenswrapper[4746]: I1211 10:56:09.814571 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-vr9wq_efe16578-2d6a-40a9-9f8c-9b868a6d6a66/kube-rbac-proxy/0.log" Dec 11 10:56:09 crc kubenswrapper[4746]: I1211 
10:56:09.915242 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-vr9wq_efe16578-2d6a-40a9-9f8c-9b868a6d6a66/manager/0.log" Dec 11 10:56:10 crc kubenswrapper[4746]: I1211 10:56:10.008446 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-5c8g4_5d8442a7-c511-4f69-b04e-45e750f27bfa/kube-rbac-proxy/0.log" Dec 11 10:56:10 crc kubenswrapper[4746]: I1211 10:56:10.139263 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-5c8g4_5d8442a7-c511-4f69-b04e-45e750f27bfa/manager/0.log" Dec 11 10:56:10 crc kubenswrapper[4746]: I1211 10:56:10.205854 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-g9qlp_9ea7dd8b-4871-43c0-a66f-113742627a6b/kube-rbac-proxy/0.log" Dec 11 10:56:10 crc kubenswrapper[4746]: I1211 10:56:10.250382 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-g9qlp_9ea7dd8b-4871-43c0-a66f-113742627a6b/manager/0.log" Dec 11 10:56:10 crc kubenswrapper[4746]: I1211 10:56:10.419945 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f8tnmf_648adc18-f046-4dcf-9a52-c69946ffa83a/kube-rbac-proxy/0.log" Dec 11 10:56:10 crc kubenswrapper[4746]: I1211 10:56:10.424464 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f8tnmf_648adc18-f046-4dcf-9a52-c69946ffa83a/manager/0.log" Dec 11 10:56:10 crc kubenswrapper[4746]: I1211 10:56:10.837514 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-8bb46fc5c-mr76h_a9a072f8-67ea-43f6-a43a-1f553a050f11/operator/0.log" Dec 11 10:56:10 crc kubenswrapper[4746]: I1211 10:56:10.902715 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-hsbv7_f80e03b9-de97-4ae0-bbcd-edc0079f20f3/registry-server/0.log" Dec 11 10:56:11 crc kubenswrapper[4746]: I1211 10:56:11.152489 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-kklt5_798a9e32-0bc8-4231-834a-fc2b002c87aa/kube-rbac-proxy/0.log" Dec 11 10:56:11 crc kubenswrapper[4746]: I1211 10:56:11.247726 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-kklt5_798a9e32-0bc8-4231-834a-fc2b002c87aa/manager/0.log" Dec 11 10:56:11 crc kubenswrapper[4746]: I1211 10:56:11.376018 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-882l4_778c0ffc-7a48-4159-8a1b-f34a805bc1ae/kube-rbac-proxy/0.log" Dec 11 10:56:11 crc kubenswrapper[4746]: I1211 10:56:11.458635 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-882l4_778c0ffc-7a48-4159-8a1b-f34a805bc1ae/manager/0.log" Dec 11 10:56:11 crc kubenswrapper[4746]: I1211 10:56:11.490792 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-rq8z4_23f6b30a-57a8-4920-ab2e-dfebef4d9ce6/operator/0.log" Dec 11 10:56:11 crc kubenswrapper[4746]: I1211 10:56:11.686356 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-b797r_3b8201ce-fb41-4474-9609-689fe0d093ec/kube-rbac-proxy/0.log" Dec 11 10:56:11 crc kubenswrapper[4746]: I1211 10:56:11.784074 4746 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-b797r_3b8201ce-fb41-4474-9609-689fe0d093ec/manager/0.log" Dec 11 10:56:11 crc kubenswrapper[4746]: I1211 10:56:11.841610 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-686fb77d86-hmhr5_5d1a162f-09fe-4a7a-854e-3236282b3189/manager/0.log" Dec 11 10:56:11 crc kubenswrapper[4746]: I1211 10:56:11.926797 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-kz5f2_4890e377-1482-4341-b002-bb54e05d5ded/kube-rbac-proxy/0.log" Dec 11 10:56:12 crc kubenswrapper[4746]: I1211 10:56:12.012522 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-kz5f2_4890e377-1482-4341-b002-bb54e05d5ded/manager/0.log" Dec 11 10:56:12 crc kubenswrapper[4746]: I1211 10:56:12.029159 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-zbwxc_00e181e7-8b84-49f6-96c5-4da046644469/kube-rbac-proxy/0.log" Dec 11 10:56:12 crc kubenswrapper[4746]: I1211 10:56:12.050928 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-zbwxc_00e181e7-8b84-49f6-96c5-4da046644469/manager/0.log" Dec 11 10:56:12 crc kubenswrapper[4746]: I1211 10:56:12.207117 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-sg2q4_5f3fcc59-b850-4041-84b3-9ccc788c73fc/manager/0.log" Dec 11 10:56:12 crc kubenswrapper[4746]: I1211 10:56:12.218283 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-sg2q4_5f3fcc59-b850-4041-84b3-9ccc788c73fc/kube-rbac-proxy/0.log" Dec 11 10:56:30 crc kubenswrapper[4746]: 
I1211 10:56:30.640284 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-4t7sz_3534a1e6-5e3c-4ae5-a981-228d9ae0d5bb/control-plane-machine-set-operator/0.log" Dec 11 10:56:30 crc kubenswrapper[4746]: I1211 10:56:30.850988 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lwl94_b1196114-7a7a-4f77-951a-20d10c32d0b2/kube-rbac-proxy/0.log" Dec 11 10:56:30 crc kubenswrapper[4746]: I1211 10:56:30.944796 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lwl94_b1196114-7a7a-4f77-951a-20d10c32d0b2/machine-api-operator/0.log" Dec 11 10:56:42 crc kubenswrapper[4746]: I1211 10:56:42.916901 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-thbcd_c5782e13-fb8a-4d0c-b0b2-9649898453d7/cert-manager-controller/0.log" Dec 11 10:56:43 crc kubenswrapper[4746]: I1211 10:56:43.052266 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-j8m6k_421d5070-53ff-451a-bc95-3b8e966afd09/cert-manager-cainjector/0.log" Dec 11 10:56:43 crc kubenswrapper[4746]: I1211 10:56:43.140730 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-hww2m_3eb397c1-a790-47d8-9b3f-93030517ef10/cert-manager-webhook/0.log" Dec 11 10:56:55 crc kubenswrapper[4746]: I1211 10:56:55.056302 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-rzgk7_7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f/nmstate-console-plugin/0.log" Dec 11 10:56:55 crc kubenswrapper[4746]: I1211 10:56:55.251142 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9kf4z_54a843e2-1db9-49db-89e5-5254b7b50bab/nmstate-handler/0.log" Dec 11 10:56:55 crc kubenswrapper[4746]: I1211 
10:56:55.326397 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-2sdgm_a911ba40-1cb3-4447-8f86-b03341052ae8/kube-rbac-proxy/0.log" Dec 11 10:56:55 crc kubenswrapper[4746]: I1211 10:56:55.337657 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-2sdgm_a911ba40-1cb3-4447-8f86-b03341052ae8/nmstate-metrics/0.log" Dec 11 10:56:55 crc kubenswrapper[4746]: I1211 10:56:55.574575 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-dln6r_f3af9a18-e9fe-429b-988a-4289790515b6/nmstate-operator/0.log" Dec 11 10:56:55 crc kubenswrapper[4746]: I1211 10:56:55.587755 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-vt7v4_9b209fd0-9f8c-4608-99df-7c691450b004/nmstate-webhook/0.log" Dec 11 10:56:59 crc kubenswrapper[4746]: I1211 10:56:59.878227 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:56:59 crc kubenswrapper[4746]: I1211 10:56:59.878851 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:57:10 crc kubenswrapper[4746]: I1211 10:57:10.264433 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-pfns9_912d8133-522c-4e88-a253-bf9be07b4d13/kube-rbac-proxy/0.log" Dec 11 10:57:10 crc kubenswrapper[4746]: I1211 10:57:10.362126 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-5bddd4b946-pfns9_912d8133-522c-4e88-a253-bf9be07b4d13/controller/0.log" Dec 11 10:57:10 crc kubenswrapper[4746]: I1211 10:57:10.495925 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-frr-files/0.log" Dec 11 10:57:10 crc kubenswrapper[4746]: I1211 10:57:10.681100 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-metrics/0.log" Dec 11 10:57:10 crc kubenswrapper[4746]: I1211 10:57:10.681723 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-frr-files/0.log" Dec 11 10:57:10 crc kubenswrapper[4746]: I1211 10:57:10.683618 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-reloader/0.log" Dec 11 10:57:10 crc kubenswrapper[4746]: I1211 10:57:10.700130 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-reloader/0.log" Dec 11 10:57:11 crc kubenswrapper[4746]: I1211 10:57:11.026059 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-reloader/0.log" Dec 11 10:57:11 crc kubenswrapper[4746]: I1211 10:57:11.033383 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-metrics/0.log" Dec 11 10:57:11 crc kubenswrapper[4746]: I1211 10:57:11.036873 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-metrics/0.log" Dec 11 10:57:11 crc kubenswrapper[4746]: I1211 10:57:11.057525 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-frr-files/0.log" Dec 11 10:57:11 crc kubenswrapper[4746]: I1211 10:57:11.316777 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-frr-files/0.log" Dec 11 10:57:11 crc kubenswrapper[4746]: I1211 10:57:11.395624 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-metrics/0.log" Dec 11 10:57:11 crc kubenswrapper[4746]: I1211 10:57:11.514938 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-reloader/0.log" Dec 11 10:57:11 crc kubenswrapper[4746]: I1211 10:57:11.538886 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/controller/0.log" Dec 11 10:57:11 crc kubenswrapper[4746]: I1211 10:57:11.741025 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/frr-metrics/0.log" Dec 11 10:57:11 crc kubenswrapper[4746]: I1211 10:57:11.831543 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/kube-rbac-proxy-frr/0.log" Dec 11 10:57:11 crc kubenswrapper[4746]: I1211 10:57:11.872219 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/kube-rbac-proxy/0.log" Dec 11 10:57:12 crc kubenswrapper[4746]: I1211 10:57:12.065078 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/reloader/0.log" Dec 11 10:57:12 crc kubenswrapper[4746]: I1211 10:57:12.080124 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-bs5bz_a7040058-21f6-4b31-8369-5c8c471f9cf6/frr-k8s-webhook-server/0.log" Dec 11 10:57:12 crc kubenswrapper[4746]: I1211 10:57:12.325548 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-675c7b7dd8-6mxrg_5803af05-a3ac-403a-88f6-4b7fb21678d0/manager/0.log" Dec 11 10:57:12 crc kubenswrapper[4746]: I1211 10:57:12.562807 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-797f9db975-fpbhm_5b01faa1-3b2c-448d-8285-217c6dbacc16/webhook-server/0.log" Dec 11 10:57:12 crc kubenswrapper[4746]: I1211 10:57:12.589074 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rqt4v_cc662df8-2d5c-41e8-919a-dc8d1f4d20d8/kube-rbac-proxy/0.log" Dec 11 10:57:13 crc kubenswrapper[4746]: I1211 10:57:13.105226 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/frr/0.log" Dec 11 10:57:13 crc kubenswrapper[4746]: I1211 10:57:13.330345 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rqt4v_cc662df8-2d5c-41e8-919a-dc8d1f4d20d8/speaker/0.log" Dec 11 10:57:27 crc kubenswrapper[4746]: I1211 10:57:27.199294 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2_6a0f0228-471c-45fd-9197-241b2ba3c70a/util/0.log" Dec 11 10:57:27 crc kubenswrapper[4746]: I1211 10:57:27.395992 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2_6a0f0228-471c-45fd-9197-241b2ba3c70a/util/0.log" Dec 11 10:57:27 crc kubenswrapper[4746]: I1211 10:57:27.436022 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2_6a0f0228-471c-45fd-9197-241b2ba3c70a/pull/0.log" Dec 11 10:57:27 crc kubenswrapper[4746]: I1211 10:57:27.436726 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2_6a0f0228-471c-45fd-9197-241b2ba3c70a/pull/0.log" Dec 11 10:57:27 crc kubenswrapper[4746]: I1211 10:57:27.660952 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2_6a0f0228-471c-45fd-9197-241b2ba3c70a/pull/0.log" Dec 11 10:57:27 crc kubenswrapper[4746]: I1211 10:57:27.667653 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2_6a0f0228-471c-45fd-9197-241b2ba3c70a/util/0.log" Dec 11 10:57:27 crc kubenswrapper[4746]: I1211 10:57:27.668950 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2_6a0f0228-471c-45fd-9197-241b2ba3c70a/extract/0.log" Dec 11 10:57:27 crc kubenswrapper[4746]: I1211 10:57:27.839427 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777_42a9ddfe-247e-4cae-ab22-03c2e0b4a494/util/0.log" Dec 11 10:57:28 crc kubenswrapper[4746]: I1211 10:57:28.045811 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777_42a9ddfe-247e-4cae-ab22-03c2e0b4a494/util/0.log" Dec 11 10:57:28 crc kubenswrapper[4746]: I1211 10:57:28.086974 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777_42a9ddfe-247e-4cae-ab22-03c2e0b4a494/pull/0.log" Dec 11 
10:57:28 crc kubenswrapper[4746]: I1211 10:57:28.087038 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777_42a9ddfe-247e-4cae-ab22-03c2e0b4a494/pull/0.log" Dec 11 10:57:28 crc kubenswrapper[4746]: I1211 10:57:28.246322 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777_42a9ddfe-247e-4cae-ab22-03c2e0b4a494/pull/0.log" Dec 11 10:57:28 crc kubenswrapper[4746]: I1211 10:57:28.252775 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777_42a9ddfe-247e-4cae-ab22-03c2e0b4a494/util/0.log" Dec 11 10:57:28 crc kubenswrapper[4746]: I1211 10:57:28.291410 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777_42a9ddfe-247e-4cae-ab22-03c2e0b4a494/extract/0.log" Dec 11 10:57:28 crc kubenswrapper[4746]: I1211 10:57:28.439070 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hcwv7_164dbbfa-f646-47f0-9de6-d5f466032c15/extract-utilities/0.log" Dec 11 10:57:28 crc kubenswrapper[4746]: I1211 10:57:28.650138 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hcwv7_164dbbfa-f646-47f0-9de6-d5f466032c15/extract-utilities/0.log" Dec 11 10:57:28 crc kubenswrapper[4746]: I1211 10:57:28.658004 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hcwv7_164dbbfa-f646-47f0-9de6-d5f466032c15/extract-content/0.log" Dec 11 10:57:28 crc kubenswrapper[4746]: I1211 10:57:28.712249 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hcwv7_164dbbfa-f646-47f0-9de6-d5f466032c15/extract-content/0.log" Dec 11 
10:57:28 crc kubenswrapper[4746]: I1211 10:57:28.851688 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hcwv7_164dbbfa-f646-47f0-9de6-d5f466032c15/extract-utilities/0.log" Dec 11 10:57:28 crc kubenswrapper[4746]: I1211 10:57:28.917392 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hcwv7_164dbbfa-f646-47f0-9de6-d5f466032c15/extract-content/0.log" Dec 11 10:57:29 crc kubenswrapper[4746]: I1211 10:57:29.170498 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5rmqc_50587a78-88d9-43a1-98d8-8b7941be4600/extract-utilities/0.log" Dec 11 10:57:29 crc kubenswrapper[4746]: I1211 10:57:29.271471 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hcwv7_164dbbfa-f646-47f0-9de6-d5f466032c15/registry-server/0.log" Dec 11 10:57:29 crc kubenswrapper[4746]: I1211 10:57:29.420565 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5rmqc_50587a78-88d9-43a1-98d8-8b7941be4600/extract-utilities/0.log" Dec 11 10:57:29 crc kubenswrapper[4746]: I1211 10:57:29.447274 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5rmqc_50587a78-88d9-43a1-98d8-8b7941be4600/extract-content/0.log" Dec 11 10:57:29 crc kubenswrapper[4746]: I1211 10:57:29.448036 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5rmqc_50587a78-88d9-43a1-98d8-8b7941be4600/extract-content/0.log" Dec 11 10:57:29 crc kubenswrapper[4746]: I1211 10:57:29.602541 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5rmqc_50587a78-88d9-43a1-98d8-8b7941be4600/extract-content/0.log" Dec 11 10:57:29 crc kubenswrapper[4746]: I1211 10:57:29.647933 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-5rmqc_50587a78-88d9-43a1-98d8-8b7941be4600/extract-utilities/0.log" Dec 11 10:57:29 crc kubenswrapper[4746]: I1211 10:57:29.842192 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hfln2_71b25ce7-0542-4bbf-a7c7-ae760345ede3/marketplace-operator/2.log" Dec 11 10:57:29 crc kubenswrapper[4746]: I1211 10:57:29.877184 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:57:29 crc kubenswrapper[4746]: I1211 10:57:29.877268 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:57:29 crc kubenswrapper[4746]: I1211 10:57:29.911118 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hfln2_71b25ce7-0542-4bbf-a7c7-ae760345ede3/marketplace-operator/1.log" Dec 11 10:57:30 crc kubenswrapper[4746]: I1211 10:57:30.145677 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kdhlr_a5a6269e-2d2d-4034-b7af-b0cf87317c98/extract-utilities/0.log" Dec 11 10:57:30 crc kubenswrapper[4746]: I1211 10:57:30.308275 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5rmqc_50587a78-88d9-43a1-98d8-8b7941be4600/registry-server/0.log" Dec 11 10:57:30 crc kubenswrapper[4746]: I1211 10:57:30.344189 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-kdhlr_a5a6269e-2d2d-4034-b7af-b0cf87317c98/extract-utilities/0.log" Dec 11 10:57:30 crc kubenswrapper[4746]: I1211 10:57:30.397234 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kdhlr_a5a6269e-2d2d-4034-b7af-b0cf87317c98/extract-content/0.log" Dec 11 10:57:30 crc kubenswrapper[4746]: I1211 10:57:30.402520 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kdhlr_a5a6269e-2d2d-4034-b7af-b0cf87317c98/extract-content/0.log" Dec 11 10:57:30 crc kubenswrapper[4746]: I1211 10:57:30.717084 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kdhlr_a5a6269e-2d2d-4034-b7af-b0cf87317c98/extract-utilities/0.log" Dec 11 10:57:30 crc kubenswrapper[4746]: I1211 10:57:30.726919 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kdhlr_a5a6269e-2d2d-4034-b7af-b0cf87317c98/extract-content/0.log" Dec 11 10:57:30 crc kubenswrapper[4746]: I1211 10:57:30.803267 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kdhlr_a5a6269e-2d2d-4034-b7af-b0cf87317c98/registry-server/0.log" Dec 11 10:57:30 crc kubenswrapper[4746]: I1211 10:57:30.961101 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k24k2_115e0a6a-c3eb-4931-af44-a645f0904e3e/extract-utilities/0.log" Dec 11 10:57:31 crc kubenswrapper[4746]: I1211 10:57:31.144007 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k24k2_115e0a6a-c3eb-4931-af44-a645f0904e3e/extract-content/0.log" Dec 11 10:57:31 crc kubenswrapper[4746]: I1211 10:57:31.144019 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-k24k2_115e0a6a-c3eb-4931-af44-a645f0904e3e/extract-utilities/0.log" Dec 11 10:57:31 crc kubenswrapper[4746]: I1211 10:57:31.160476 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k24k2_115e0a6a-c3eb-4931-af44-a645f0904e3e/extract-content/0.log" Dec 11 10:57:31 crc kubenswrapper[4746]: I1211 10:57:31.330164 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k24k2_115e0a6a-c3eb-4931-af44-a645f0904e3e/extract-utilities/0.log" Dec 11 10:57:31 crc kubenswrapper[4746]: I1211 10:57:31.348714 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k24k2_115e0a6a-c3eb-4931-af44-a645f0904e3e/extract-content/0.log" Dec 11 10:57:31 crc kubenswrapper[4746]: I1211 10:57:31.972941 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k24k2_115e0a6a-c3eb-4931-af44-a645f0904e3e/registry-server/0.log" Dec 11 10:57:59 crc kubenswrapper[4746]: I1211 10:57:59.878054 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 10:57:59 crc kubenswrapper[4746]: I1211 10:57:59.878653 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 10:57:59 crc kubenswrapper[4746]: I1211 10:57:59.878693 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 10:57:59 crc kubenswrapper[4746]: I1211 10:57:59.879549 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba"} pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 10:57:59 crc kubenswrapper[4746]: I1211 10:57:59.879607 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" containerID="cri-o://7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" gracePeriod=600 Dec 11 10:58:00 crc kubenswrapper[4746]: E1211 10:58:00.009792 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:58:00 crc kubenswrapper[4746]: I1211 10:58:00.155019 4746 generic.go:334] "Generic (PLEG): container finished" podID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" exitCode=0 Dec 11 10:58:00 crc kubenswrapper[4746]: I1211 10:58:00.155104 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerDied","Data":"7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba"} Dec 11 10:58:00 crc 
kubenswrapper[4746]: I1211 10:58:00.155181 4746 scope.go:117] "RemoveContainer" containerID="103e69231676c121cf31be18b3bc2feb4dfdceb796773042bf1afbc251fcee31" Dec 11 10:58:00 crc kubenswrapper[4746]: I1211 10:58:00.156028 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 10:58:00 crc kubenswrapper[4746]: E1211 10:58:00.156380 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:58:13 crc kubenswrapper[4746]: I1211 10:58:13.630252 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 10:58:13 crc kubenswrapper[4746]: E1211 10:58:13.630916 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:58:25 crc kubenswrapper[4746]: I1211 10:58:25.630546 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 10:58:25 crc kubenswrapper[4746]: E1211 10:58:25.631642 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:58:37 crc kubenswrapper[4746]: I1211 10:58:37.645385 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 10:58:37 crc kubenswrapper[4746]: E1211 10:58:37.646258 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:58:52 crc kubenswrapper[4746]: I1211 10:58:52.630607 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 10:58:52 crc kubenswrapper[4746]: E1211 10:58:52.631535 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:59:03 crc kubenswrapper[4746]: I1211 10:59:03.631610 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 10:59:03 crc kubenswrapper[4746]: E1211 10:59:03.632653 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:59:17 crc kubenswrapper[4746]: I1211 10:59:17.641284 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 10:59:17 crc kubenswrapper[4746]: E1211 10:59:17.642545 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:59:30 crc kubenswrapper[4746]: I1211 10:59:30.030749 4746 generic.go:334] "Generic (PLEG): container finished" podID="b749c961-8ad9-411a-90d0-f1294a614816" containerID="951d1f6138267f03ff5f29de2cf269ead8eb85f0150a7df2d8b2c0f18c8af84b" exitCode=0 Dec 11 10:59:30 crc kubenswrapper[4746]: I1211 10:59:30.030830 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jttbd/must-gather-lqxkr" event={"ID":"b749c961-8ad9-411a-90d0-f1294a614816","Type":"ContainerDied","Data":"951d1f6138267f03ff5f29de2cf269ead8eb85f0150a7df2d8b2c0f18c8af84b"} Dec 11 10:59:30 crc kubenswrapper[4746]: I1211 10:59:30.032171 4746 scope.go:117] "RemoveContainer" containerID="951d1f6138267f03ff5f29de2cf269ead8eb85f0150a7df2d8b2c0f18c8af84b" Dec 11 10:59:30 crc kubenswrapper[4746]: I1211 10:59:30.414149 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jttbd_must-gather-lqxkr_b749c961-8ad9-411a-90d0-f1294a614816/gather/0.log" Dec 11 10:59:30 crc kubenswrapper[4746]: I1211 10:59:30.630007 4746 scope.go:117] 
"RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 10:59:30 crc kubenswrapper[4746]: E1211 10:59:30.630756 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:59:37 crc kubenswrapper[4746]: I1211 10:59:37.983370 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-jttbd/must-gather-lqxkr"] Dec 11 10:59:37 crc kubenswrapper[4746]: I1211 10:59:37.984511 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-jttbd/must-gather-lqxkr" podUID="b749c961-8ad9-411a-90d0-f1294a614816" containerName="copy" containerID="cri-o://607f3e05925fad47a871ec76ae03de5be2bed605c05bbf880b2ff9d00ab1bd49" gracePeriod=2 Dec 11 10:59:37 crc kubenswrapper[4746]: I1211 10:59:37.994232 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-jttbd/must-gather-lqxkr"] Dec 11 10:59:38 crc kubenswrapper[4746]: I1211 10:59:38.118241 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jttbd_must-gather-lqxkr_b749c961-8ad9-411a-90d0-f1294a614816/copy/0.log" Dec 11 10:59:38 crc kubenswrapper[4746]: I1211 10:59:38.118821 4746 generic.go:334] "Generic (PLEG): container finished" podID="b749c961-8ad9-411a-90d0-f1294a614816" containerID="607f3e05925fad47a871ec76ae03de5be2bed605c05bbf880b2ff9d00ab1bd49" exitCode=143 Dec 11 10:59:38 crc kubenswrapper[4746]: I1211 10:59:38.506537 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jttbd_must-gather-lqxkr_b749c961-8ad9-411a-90d0-f1294a614816/copy/0.log" Dec 11 
10:59:38 crc kubenswrapper[4746]: I1211 10:59:38.507286 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jttbd/must-gather-lqxkr" Dec 11 10:59:38 crc kubenswrapper[4746]: I1211 10:59:38.544083 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b749c961-8ad9-411a-90d0-f1294a614816-must-gather-output\") pod \"b749c961-8ad9-411a-90d0-f1294a614816\" (UID: \"b749c961-8ad9-411a-90d0-f1294a614816\") " Dec 11 10:59:38 crc kubenswrapper[4746]: I1211 10:59:38.544172 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxbxx\" (UniqueName: \"kubernetes.io/projected/b749c961-8ad9-411a-90d0-f1294a614816-kube-api-access-sxbxx\") pod \"b749c961-8ad9-411a-90d0-f1294a614816\" (UID: \"b749c961-8ad9-411a-90d0-f1294a614816\") " Dec 11 10:59:38 crc kubenswrapper[4746]: I1211 10:59:38.567319 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b749c961-8ad9-411a-90d0-f1294a614816-kube-api-access-sxbxx" (OuterVolumeSpecName: "kube-api-access-sxbxx") pod "b749c961-8ad9-411a-90d0-f1294a614816" (UID: "b749c961-8ad9-411a-90d0-f1294a614816"). InnerVolumeSpecName "kube-api-access-sxbxx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 10:59:38 crc kubenswrapper[4746]: I1211 10:59:38.646875 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxbxx\" (UniqueName: \"kubernetes.io/projected/b749c961-8ad9-411a-90d0-f1294a614816-kube-api-access-sxbxx\") on node \"crc\" DevicePath \"\"" Dec 11 10:59:38 crc kubenswrapper[4746]: I1211 10:59:38.769678 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b749c961-8ad9-411a-90d0-f1294a614816-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b749c961-8ad9-411a-90d0-f1294a614816" (UID: "b749c961-8ad9-411a-90d0-f1294a614816"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 10:59:38 crc kubenswrapper[4746]: I1211 10:59:38.853246 4746 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b749c961-8ad9-411a-90d0-f1294a614816-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 11 10:59:39 crc kubenswrapper[4746]: I1211 10:59:39.160601 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-jttbd_must-gather-lqxkr_b749c961-8ad9-411a-90d0-f1294a614816/copy/0.log" Dec 11 10:59:39 crc kubenswrapper[4746]: I1211 10:59:39.161732 4746 scope.go:117] "RemoveContainer" containerID="607f3e05925fad47a871ec76ae03de5be2bed605c05bbf880b2ff9d00ab1bd49" Dec 11 10:59:39 crc kubenswrapper[4746]: I1211 10:59:39.162080 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-jttbd/must-gather-lqxkr" Dec 11 10:59:39 crc kubenswrapper[4746]: I1211 10:59:39.220618 4746 scope.go:117] "RemoveContainer" containerID="951d1f6138267f03ff5f29de2cf269ead8eb85f0150a7df2d8b2c0f18c8af84b" Dec 11 10:59:39 crc kubenswrapper[4746]: E1211 10:59:39.263362 4746 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb749c961_8ad9_411a_90d0_f1294a614816.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb749c961_8ad9_411a_90d0_f1294a614816.slice/crio-c052e1f207163e9618fe2f099a72e5adf9c2075fdc3d4d576fc066c1fcceaebc\": RecentStats: unable to find data in memory cache]" Dec 11 10:59:39 crc kubenswrapper[4746]: I1211 10:59:39.643717 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b749c961-8ad9-411a-90d0-f1294a614816" path="/var/lib/kubelet/pods/b749c961-8ad9-411a-90d0-f1294a614816/volumes" Dec 11 10:59:45 crc kubenswrapper[4746]: I1211 10:59:45.630730 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 10:59:45 crc kubenswrapper[4746]: E1211 10:59:45.631473 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 10:59:56 crc kubenswrapper[4746]: I1211 10:59:56.630460 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 10:59:56 crc kubenswrapper[4746]: E1211 10:59:56.631452 4746 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.258750 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r"] Dec 11 11:00:00 crc kubenswrapper[4746]: E1211 11:00:00.260769 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b749c961-8ad9-411a-90d0-f1294a614816" containerName="gather" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.260810 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b749c961-8ad9-411a-90d0-f1294a614816" containerName="gather" Dec 11 11:00:00 crc kubenswrapper[4746]: E1211 11:00:00.260860 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b749c961-8ad9-411a-90d0-f1294a614816" containerName="copy" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.260869 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="b749c961-8ad9-411a-90d0-f1294a614816" containerName="copy" Dec 11 11:00:00 crc kubenswrapper[4746]: E1211 11:00:00.260883 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80226d5-f2cc-4b1a-8369-eac06ef433cf" containerName="container-00" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.260889 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80226d5-f2cc-4b1a-8369-eac06ef433cf" containerName="container-00" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.261145 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b749c961-8ad9-411a-90d0-f1294a614816" containerName="gather" Dec 11 11:00:00 crc kubenswrapper[4746]: 
I1211 11:00:00.261167 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="b749c961-8ad9-411a-90d0-f1294a614816" containerName="copy" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.261190 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="c80226d5-f2cc-4b1a-8369-eac06ef433cf" containerName="container-00" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.262218 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.267580 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.267995 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.270772 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r"] Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.346220 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e28be25e-d133-4573-90d3-43baaabbcc39-secret-volume\") pod \"collect-profiles-29424180-rtg9r\" (UID: \"e28be25e-d133-4573-90d3-43baaabbcc39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.346301 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nwxm\" (UniqueName: \"kubernetes.io/projected/e28be25e-d133-4573-90d3-43baaabbcc39-kube-api-access-2nwxm\") pod \"collect-profiles-29424180-rtg9r\" (UID: \"e28be25e-d133-4573-90d3-43baaabbcc39\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.346377 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e28be25e-d133-4573-90d3-43baaabbcc39-config-volume\") pod \"collect-profiles-29424180-rtg9r\" (UID: \"e28be25e-d133-4573-90d3-43baaabbcc39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.448510 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nwxm\" (UniqueName: \"kubernetes.io/projected/e28be25e-d133-4573-90d3-43baaabbcc39-kube-api-access-2nwxm\") pod \"collect-profiles-29424180-rtg9r\" (UID: \"e28be25e-d133-4573-90d3-43baaabbcc39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.448656 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e28be25e-d133-4573-90d3-43baaabbcc39-config-volume\") pod \"collect-profiles-29424180-rtg9r\" (UID: \"e28be25e-d133-4573-90d3-43baaabbcc39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.448806 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e28be25e-d133-4573-90d3-43baaabbcc39-secret-volume\") pod \"collect-profiles-29424180-rtg9r\" (UID: \"e28be25e-d133-4573-90d3-43baaabbcc39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.449812 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e28be25e-d133-4573-90d3-43baaabbcc39-config-volume\") pod \"collect-profiles-29424180-rtg9r\" (UID: \"e28be25e-d133-4573-90d3-43baaabbcc39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.457299 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e28be25e-d133-4573-90d3-43baaabbcc39-secret-volume\") pod \"collect-profiles-29424180-rtg9r\" (UID: \"e28be25e-d133-4573-90d3-43baaabbcc39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.480847 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nwxm\" (UniqueName: \"kubernetes.io/projected/e28be25e-d133-4573-90d3-43baaabbcc39-kube-api-access-2nwxm\") pod \"collect-profiles-29424180-rtg9r\" (UID: \"e28be25e-d133-4573-90d3-43baaabbcc39\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" Dec 11 11:00:00 crc kubenswrapper[4746]: I1211 11:00:00.585613 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" Dec 11 11:00:01 crc kubenswrapper[4746]: I1211 11:00:01.045268 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r"] Dec 11 11:00:01 crc kubenswrapper[4746]: I1211 11:00:01.391629 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" event={"ID":"e28be25e-d133-4573-90d3-43baaabbcc39","Type":"ContainerStarted","Data":"fb7c51dd5c39f676616f031530e290e64bc27c8b308dc284b7fdc3d782a91ee0"} Dec 11 11:00:01 crc kubenswrapper[4746]: I1211 11:00:01.392003 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" event={"ID":"e28be25e-d133-4573-90d3-43baaabbcc39","Type":"ContainerStarted","Data":"93948253c88ba70b6e3810ab8cc7c041d5f66a761800e49eeae3b23e4b4f1859"} Dec 11 11:00:01 crc kubenswrapper[4746]: I1211 11:00:01.418485 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" podStartSLOduration=1.418461491 podStartE2EDuration="1.418461491s" podCreationTimestamp="2025-12-11 11:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 11:00:01.406508168 +0000 UTC m=+3974.266371491" watchObservedRunningTime="2025-12-11 11:00:01.418461491 +0000 UTC m=+3974.278324804" Dec 11 11:00:02 crc kubenswrapper[4746]: I1211 11:00:02.402945 4746 generic.go:334] "Generic (PLEG): container finished" podID="e28be25e-d133-4573-90d3-43baaabbcc39" containerID="fb7c51dd5c39f676616f031530e290e64bc27c8b308dc284b7fdc3d782a91ee0" exitCode=0 Dec 11 11:00:02 crc kubenswrapper[4746]: I1211 11:00:02.403012 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" event={"ID":"e28be25e-d133-4573-90d3-43baaabbcc39","Type":"ContainerDied","Data":"fb7c51dd5c39f676616f031530e290e64bc27c8b308dc284b7fdc3d782a91ee0"} Dec 11 11:00:03 crc kubenswrapper[4746]: I1211 11:00:03.744884 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" Dec 11 11:00:03 crc kubenswrapper[4746]: I1211 11:00:03.823307 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e28be25e-d133-4573-90d3-43baaabbcc39-config-volume\") pod \"e28be25e-d133-4573-90d3-43baaabbcc39\" (UID: \"e28be25e-d133-4573-90d3-43baaabbcc39\") " Dec 11 11:00:03 crc kubenswrapper[4746]: I1211 11:00:03.823407 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e28be25e-d133-4573-90d3-43baaabbcc39-secret-volume\") pod \"e28be25e-d133-4573-90d3-43baaabbcc39\" (UID: \"e28be25e-d133-4573-90d3-43baaabbcc39\") " Dec 11 11:00:03 crc kubenswrapper[4746]: I1211 11:00:03.823458 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nwxm\" (UniqueName: \"kubernetes.io/projected/e28be25e-d133-4573-90d3-43baaabbcc39-kube-api-access-2nwxm\") pod \"e28be25e-d133-4573-90d3-43baaabbcc39\" (UID: \"e28be25e-d133-4573-90d3-43baaabbcc39\") " Dec 11 11:00:03 crc kubenswrapper[4746]: I1211 11:00:03.824425 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e28be25e-d133-4573-90d3-43baaabbcc39-config-volume" (OuterVolumeSpecName: "config-volume") pod "e28be25e-d133-4573-90d3-43baaabbcc39" (UID: "e28be25e-d133-4573-90d3-43baaabbcc39"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 11 11:00:03 crc kubenswrapper[4746]: I1211 11:00:03.830626 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e28be25e-d133-4573-90d3-43baaabbcc39-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e28be25e-d133-4573-90d3-43baaabbcc39" (UID: "e28be25e-d133-4573-90d3-43baaabbcc39"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 11:00:03 crc kubenswrapper[4746]: I1211 11:00:03.835774 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28be25e-d133-4573-90d3-43baaabbcc39-kube-api-access-2nwxm" (OuterVolumeSpecName: "kube-api-access-2nwxm") pod "e28be25e-d133-4573-90d3-43baaabbcc39" (UID: "e28be25e-d133-4573-90d3-43baaabbcc39"). InnerVolumeSpecName "kube-api-access-2nwxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:00:03 crc kubenswrapper[4746]: I1211 11:00:03.925751 4746 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e28be25e-d133-4573-90d3-43baaabbcc39-config-volume\") on node \"crc\" DevicePath \"\"" Dec 11 11:00:03 crc kubenswrapper[4746]: I1211 11:00:03.925787 4746 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e28be25e-d133-4573-90d3-43baaabbcc39-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 11 11:00:03 crc kubenswrapper[4746]: I1211 11:00:03.925798 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nwxm\" (UniqueName: \"kubernetes.io/projected/e28be25e-d133-4573-90d3-43baaabbcc39-kube-api-access-2nwxm\") on node \"crc\" DevicePath \"\"" Dec 11 11:00:04 crc kubenswrapper[4746]: I1211 11:00:04.421818 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" 
event={"ID":"e28be25e-d133-4573-90d3-43baaabbcc39","Type":"ContainerDied","Data":"93948253c88ba70b6e3810ab8cc7c041d5f66a761800e49eeae3b23e4b4f1859"} Dec 11 11:00:04 crc kubenswrapper[4746]: I1211 11:00:04.422156 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93948253c88ba70b6e3810ab8cc7c041d5f66a761800e49eeae3b23e4b4f1859" Dec 11 11:00:04 crc kubenswrapper[4746]: I1211 11:00:04.421912 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29424180-rtg9r" Dec 11 11:00:04 crc kubenswrapper[4746]: I1211 11:00:04.486528 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2"] Dec 11 11:00:04 crc kubenswrapper[4746]: I1211 11:00:04.496405 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29424135-rkgn2"] Dec 11 11:00:05 crc kubenswrapper[4746]: I1211 11:00:05.642266 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf1663d1-5ec6-49fc-bab8-f2b102daba0a" path="/var/lib/kubelet/pods/bf1663d1-5ec6-49fc-bab8-f2b102daba0a/volumes" Dec 11 11:00:09 crc kubenswrapper[4746]: I1211 11:00:09.631233 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 11:00:09 crc kubenswrapper[4746]: E1211 11:00:09.632036 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 11:00:22 crc kubenswrapper[4746]: I1211 11:00:22.631440 4746 scope.go:117] "RemoveContainer" 
containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 11:00:22 crc kubenswrapper[4746]: E1211 11:00:22.632330 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 11:00:36 crc kubenswrapper[4746]: I1211 11:00:36.631232 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 11:00:36 crc kubenswrapper[4746]: E1211 11:00:36.632426 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 11:00:51 crc kubenswrapper[4746]: I1211 11:00:51.632042 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 11:00:51 crc kubenswrapper[4746]: E1211 11:00:51.632933 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.151626 4746 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29424181-sklrt"] Dec 11 11:01:00 crc kubenswrapper[4746]: E1211 11:01:00.152957 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28be25e-d133-4573-90d3-43baaabbcc39" containerName="collect-profiles" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.152981 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28be25e-d133-4573-90d3-43baaabbcc39" containerName="collect-profiles" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.153309 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28be25e-d133-4573-90d3-43baaabbcc39" containerName="collect-profiles" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.154406 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29424181-sklrt" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.162926 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29424181-sklrt"] Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.309287 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-fernet-keys\") pod \"keystone-cron-29424181-sklrt\" (UID: \"47208b16-9400-4363-97f2-fd70600a8430\") " pod="openstack/keystone-cron-29424181-sklrt" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.309368 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-config-data\") pod \"keystone-cron-29424181-sklrt\" (UID: \"47208b16-9400-4363-97f2-fd70600a8430\") " pod="openstack/keystone-cron-29424181-sklrt" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.309396 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-combined-ca-bundle\") pod \"keystone-cron-29424181-sklrt\" (UID: \"47208b16-9400-4363-97f2-fd70600a8430\") " pod="openstack/keystone-cron-29424181-sklrt" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.310286 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tc7w\" (UniqueName: \"kubernetes.io/projected/47208b16-9400-4363-97f2-fd70600a8430-kube-api-access-5tc7w\") pod \"keystone-cron-29424181-sklrt\" (UID: \"47208b16-9400-4363-97f2-fd70600a8430\") " pod="openstack/keystone-cron-29424181-sklrt" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.412699 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-fernet-keys\") pod \"keystone-cron-29424181-sklrt\" (UID: \"47208b16-9400-4363-97f2-fd70600a8430\") " pod="openstack/keystone-cron-29424181-sklrt" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.412754 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-config-data\") pod \"keystone-cron-29424181-sklrt\" (UID: \"47208b16-9400-4363-97f2-fd70600a8430\") " pod="openstack/keystone-cron-29424181-sklrt" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.412795 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-combined-ca-bundle\") pod \"keystone-cron-29424181-sklrt\" (UID: \"47208b16-9400-4363-97f2-fd70600a8430\") " pod="openstack/keystone-cron-29424181-sklrt" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.412870 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-5tc7w\" (UniqueName: \"kubernetes.io/projected/47208b16-9400-4363-97f2-fd70600a8430-kube-api-access-5tc7w\") pod \"keystone-cron-29424181-sklrt\" (UID: \"47208b16-9400-4363-97f2-fd70600a8430\") " pod="openstack/keystone-cron-29424181-sklrt" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.419598 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-combined-ca-bundle\") pod \"keystone-cron-29424181-sklrt\" (UID: \"47208b16-9400-4363-97f2-fd70600a8430\") " pod="openstack/keystone-cron-29424181-sklrt" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.419937 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-config-data\") pod \"keystone-cron-29424181-sklrt\" (UID: \"47208b16-9400-4363-97f2-fd70600a8430\") " pod="openstack/keystone-cron-29424181-sklrt" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.423846 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-fernet-keys\") pod \"keystone-cron-29424181-sklrt\" (UID: \"47208b16-9400-4363-97f2-fd70600a8430\") " pod="openstack/keystone-cron-29424181-sklrt" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.430697 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tc7w\" (UniqueName: \"kubernetes.io/projected/47208b16-9400-4363-97f2-fd70600a8430-kube-api-access-5tc7w\") pod \"keystone-cron-29424181-sklrt\" (UID: \"47208b16-9400-4363-97f2-fd70600a8430\") " pod="openstack/keystone-cron-29424181-sklrt" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.477680 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29424181-sklrt" Dec 11 11:01:00 crc kubenswrapper[4746]: I1211 11:01:00.934956 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29424181-sklrt"] Dec 11 11:01:01 crc kubenswrapper[4746]: I1211 11:01:01.986712 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29424181-sklrt" event={"ID":"47208b16-9400-4363-97f2-fd70600a8430","Type":"ContainerStarted","Data":"a51da6348dbc7d73ebd438963a816a14a8efef7d1384c1c1ca2efabda016d643"} Dec 11 11:01:01 crc kubenswrapper[4746]: I1211 11:01:01.987159 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29424181-sklrt" event={"ID":"47208b16-9400-4363-97f2-fd70600a8430","Type":"ContainerStarted","Data":"a40f5c42d6ebf78e90cd1182ea4bcb373ddc04274765ff04d4b18ef23e03e750"} Dec 11 11:01:02 crc kubenswrapper[4746]: I1211 11:01:02.007533 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29424181-sklrt" podStartSLOduration=2.007509994 podStartE2EDuration="2.007509994s" podCreationTimestamp="2025-12-11 11:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 11:01:02.006203639 +0000 UTC m=+4034.866066962" watchObservedRunningTime="2025-12-11 11:01:02.007509994 +0000 UTC m=+4034.867373307" Dec 11 11:01:02 crc kubenswrapper[4746]: I1211 11:01:02.630735 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 11:01:02 crc kubenswrapper[4746]: E1211 11:01:02.631421 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 11:01:05 crc kubenswrapper[4746]: I1211 11:01:05.019790 4746 generic.go:334] "Generic (PLEG): container finished" podID="47208b16-9400-4363-97f2-fd70600a8430" containerID="a51da6348dbc7d73ebd438963a816a14a8efef7d1384c1c1ca2efabda016d643" exitCode=0 Dec 11 11:01:05 crc kubenswrapper[4746]: I1211 11:01:05.019928 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29424181-sklrt" event={"ID":"47208b16-9400-4363-97f2-fd70600a8430","Type":"ContainerDied","Data":"a51da6348dbc7d73ebd438963a816a14a8efef7d1384c1c1ca2efabda016d643"} Dec 11 11:01:05 crc kubenswrapper[4746]: I1211 11:01:05.517158 4746 scope.go:117] "RemoveContainer" containerID="afb9639e9034d99cabe2b4c1e7d5d2d8d22c89154e2307cdbe15052f532bc3c3" Dec 11 11:01:06 crc kubenswrapper[4746]: I1211 11:01:06.529181 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29424181-sklrt" Dec 11 11:01:06 crc kubenswrapper[4746]: I1211 11:01:06.714692 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-combined-ca-bundle\") pod \"47208b16-9400-4363-97f2-fd70600a8430\" (UID: \"47208b16-9400-4363-97f2-fd70600a8430\") " Dec 11 11:01:06 crc kubenswrapper[4746]: I1211 11:01:06.714795 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-config-data\") pod \"47208b16-9400-4363-97f2-fd70600a8430\" (UID: \"47208b16-9400-4363-97f2-fd70600a8430\") " Dec 11 11:01:06 crc kubenswrapper[4746]: I1211 11:01:06.714952 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tc7w\" (UniqueName: 
\"kubernetes.io/projected/47208b16-9400-4363-97f2-fd70600a8430-kube-api-access-5tc7w\") pod \"47208b16-9400-4363-97f2-fd70600a8430\" (UID: \"47208b16-9400-4363-97f2-fd70600a8430\") " Dec 11 11:01:06 crc kubenswrapper[4746]: I1211 11:01:06.715064 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-fernet-keys\") pod \"47208b16-9400-4363-97f2-fd70600a8430\" (UID: \"47208b16-9400-4363-97f2-fd70600a8430\") " Dec 11 11:01:06 crc kubenswrapper[4746]: I1211 11:01:06.722116 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47208b16-9400-4363-97f2-fd70600a8430-kube-api-access-5tc7w" (OuterVolumeSpecName: "kube-api-access-5tc7w") pod "47208b16-9400-4363-97f2-fd70600a8430" (UID: "47208b16-9400-4363-97f2-fd70600a8430"). InnerVolumeSpecName "kube-api-access-5tc7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:01:06 crc kubenswrapper[4746]: I1211 11:01:06.726231 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "47208b16-9400-4363-97f2-fd70600a8430" (UID: "47208b16-9400-4363-97f2-fd70600a8430"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 11:01:06 crc kubenswrapper[4746]: I1211 11:01:06.744705 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47208b16-9400-4363-97f2-fd70600a8430" (UID: "47208b16-9400-4363-97f2-fd70600a8430"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 11:01:06 crc kubenswrapper[4746]: I1211 11:01:06.772386 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-config-data" (OuterVolumeSpecName: "config-data") pod "47208b16-9400-4363-97f2-fd70600a8430" (UID: "47208b16-9400-4363-97f2-fd70600a8430"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 11 11:01:06 crc kubenswrapper[4746]: I1211 11:01:06.817853 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tc7w\" (UniqueName: \"kubernetes.io/projected/47208b16-9400-4363-97f2-fd70600a8430-kube-api-access-5tc7w\") on node \"crc\" DevicePath \"\"" Dec 11 11:01:06 crc kubenswrapper[4746]: I1211 11:01:06.817895 4746 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 11 11:01:06 crc kubenswrapper[4746]: I1211 11:01:06.817908 4746 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 11 11:01:06 crc kubenswrapper[4746]: I1211 11:01:06.817923 4746 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47208b16-9400-4363-97f2-fd70600a8430-config-data\") on node \"crc\" DevicePath \"\"" Dec 11 11:01:07 crc kubenswrapper[4746]: I1211 11:01:07.043477 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29424181-sklrt" event={"ID":"47208b16-9400-4363-97f2-fd70600a8430","Type":"ContainerDied","Data":"a40f5c42d6ebf78e90cd1182ea4bcb373ddc04274765ff04d4b18ef23e03e750"} Dec 11 11:01:07 crc kubenswrapper[4746]: I1211 11:01:07.043546 4746 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="a40f5c42d6ebf78e90cd1182ea4bcb373ddc04274765ff04d4b18ef23e03e750" Dec 11 11:01:07 crc kubenswrapper[4746]: I1211 11:01:07.043507 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29424181-sklrt" Dec 11 11:01:17 crc kubenswrapper[4746]: I1211 11:01:17.641435 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 11:01:17 crc kubenswrapper[4746]: E1211 11:01:17.642411 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 11:01:32 crc kubenswrapper[4746]: I1211 11:01:32.630364 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 11:01:32 crc kubenswrapper[4746]: E1211 11:01:32.631273 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 11:01:44 crc kubenswrapper[4746]: I1211 11:01:44.631743 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 11:01:44 crc kubenswrapper[4746]: E1211 11:01:44.632473 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 11:01:46 crc kubenswrapper[4746]: I1211 11:01:46.201984 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4hkbz"] Dec 11 11:01:46 crc kubenswrapper[4746]: E1211 11:01:46.202982 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47208b16-9400-4363-97f2-fd70600a8430" containerName="keystone-cron" Dec 11 11:01:46 crc kubenswrapper[4746]: I1211 11:01:46.203004 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="47208b16-9400-4363-97f2-fd70600a8430" containerName="keystone-cron" Dec 11 11:01:46 crc kubenswrapper[4746]: I1211 11:01:46.203353 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="47208b16-9400-4363-97f2-fd70600a8430" containerName="keystone-cron" Dec 11 11:01:46 crc kubenswrapper[4746]: I1211 11:01:46.205690 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:01:46 crc kubenswrapper[4746]: I1211 11:01:46.228162 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hkbz"] Dec 11 11:01:46 crc kubenswrapper[4746]: I1211 11:01:46.295489 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82wvl\" (UniqueName: \"kubernetes.io/projected/95ec6de4-4578-4ebc-9485-15161973a3e1-kube-api-access-82wvl\") pod \"redhat-operators-4hkbz\" (UID: \"95ec6de4-4578-4ebc-9485-15161973a3e1\") " pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:01:46 crc kubenswrapper[4746]: I1211 11:01:46.295589 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ec6de4-4578-4ebc-9485-15161973a3e1-catalog-content\") pod \"redhat-operators-4hkbz\" (UID: \"95ec6de4-4578-4ebc-9485-15161973a3e1\") " pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:01:46 crc kubenswrapper[4746]: I1211 11:01:46.295692 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ec6de4-4578-4ebc-9485-15161973a3e1-utilities\") pod \"redhat-operators-4hkbz\" (UID: \"95ec6de4-4578-4ebc-9485-15161973a3e1\") " pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:01:46 crc kubenswrapper[4746]: I1211 11:01:46.398657 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ec6de4-4578-4ebc-9485-15161973a3e1-catalog-content\") pod \"redhat-operators-4hkbz\" (UID: \"95ec6de4-4578-4ebc-9485-15161973a3e1\") " pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:01:46 crc kubenswrapper[4746]: I1211 11:01:46.398791 4746 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ec6de4-4578-4ebc-9485-15161973a3e1-utilities\") pod \"redhat-operators-4hkbz\" (UID: \"95ec6de4-4578-4ebc-9485-15161973a3e1\") " pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:01:46 crc kubenswrapper[4746]: I1211 11:01:46.398858 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82wvl\" (UniqueName: \"kubernetes.io/projected/95ec6de4-4578-4ebc-9485-15161973a3e1-kube-api-access-82wvl\") pod \"redhat-operators-4hkbz\" (UID: \"95ec6de4-4578-4ebc-9485-15161973a3e1\") " pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:01:46 crc kubenswrapper[4746]: I1211 11:01:46.399943 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ec6de4-4578-4ebc-9485-15161973a3e1-catalog-content\") pod \"redhat-operators-4hkbz\" (UID: \"95ec6de4-4578-4ebc-9485-15161973a3e1\") " pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:01:46 crc kubenswrapper[4746]: I1211 11:01:46.400466 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ec6de4-4578-4ebc-9485-15161973a3e1-utilities\") pod \"redhat-operators-4hkbz\" (UID: \"95ec6de4-4578-4ebc-9485-15161973a3e1\") " pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:01:46 crc kubenswrapper[4746]: I1211 11:01:46.434123 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82wvl\" (UniqueName: \"kubernetes.io/projected/95ec6de4-4578-4ebc-9485-15161973a3e1-kube-api-access-82wvl\") pod \"redhat-operators-4hkbz\" (UID: \"95ec6de4-4578-4ebc-9485-15161973a3e1\") " pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:01:46 crc kubenswrapper[4746]: I1211 11:01:46.538578 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:01:47 crc kubenswrapper[4746]: I1211 11:01:47.459902 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hkbz"] Dec 11 11:01:48 crc kubenswrapper[4746]: I1211 11:01:48.442015 4746 generic.go:334] "Generic (PLEG): container finished" podID="95ec6de4-4578-4ebc-9485-15161973a3e1" containerID="9a2833c0bb55345770c8c9129cb2bb24692266349b4702d22203983397e0df97" exitCode=0 Dec 11 11:01:48 crc kubenswrapper[4746]: I1211 11:01:48.442093 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hkbz" event={"ID":"95ec6de4-4578-4ebc-9485-15161973a3e1","Type":"ContainerDied","Data":"9a2833c0bb55345770c8c9129cb2bb24692266349b4702d22203983397e0df97"} Dec 11 11:01:48 crc kubenswrapper[4746]: I1211 11:01:48.442428 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hkbz" event={"ID":"95ec6de4-4578-4ebc-9485-15161973a3e1","Type":"ContainerStarted","Data":"a285cb3ad269f554cd4348ad184cce6ae4f2595f80b384e256a45ada2649e2ba"} Dec 11 11:01:48 crc kubenswrapper[4746]: I1211 11:01:48.443901 4746 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 11 11:01:49 crc kubenswrapper[4746]: I1211 11:01:49.452728 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hkbz" event={"ID":"95ec6de4-4578-4ebc-9485-15161973a3e1","Type":"ContainerStarted","Data":"333cd023ae585e8d91878581f9cdde7ea769e93561beac9b84b0a43967235b60"} Dec 11 11:01:50 crc kubenswrapper[4746]: I1211 11:01:50.469035 4746 generic.go:334] "Generic (PLEG): container finished" podID="95ec6de4-4578-4ebc-9485-15161973a3e1" containerID="333cd023ae585e8d91878581f9cdde7ea769e93561beac9b84b0a43967235b60" exitCode=0 Dec 11 11:01:50 crc kubenswrapper[4746]: I1211 11:01:50.469172 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-4hkbz" event={"ID":"95ec6de4-4578-4ebc-9485-15161973a3e1","Type":"ContainerDied","Data":"333cd023ae585e8d91878581f9cdde7ea769e93561beac9b84b0a43967235b60"} Dec 11 11:01:51 crc kubenswrapper[4746]: I1211 11:01:51.485248 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hkbz" event={"ID":"95ec6de4-4578-4ebc-9485-15161973a3e1","Type":"ContainerStarted","Data":"d409762d4b9459193a756d38757c8e17729c3d32da3dd1a63732261342d00dac"} Dec 11 11:01:51 crc kubenswrapper[4746]: I1211 11:01:51.514719 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4hkbz" podStartSLOduration=2.76258135 podStartE2EDuration="5.51469645s" podCreationTimestamp="2025-12-11 11:01:46 +0000 UTC" firstStartedPulling="2025-12-11 11:01:48.443641653 +0000 UTC m=+4081.303504966" lastFinishedPulling="2025-12-11 11:01:51.195756753 +0000 UTC m=+4084.055620066" observedRunningTime="2025-12-11 11:01:51.504964837 +0000 UTC m=+4084.364828150" watchObservedRunningTime="2025-12-11 11:01:51.51469645 +0000 UTC m=+4084.374559763" Dec 11 11:01:56 crc kubenswrapper[4746]: I1211 11:01:56.539349 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:01:56 crc kubenswrapper[4746]: I1211 11:01:56.540676 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:01:57 crc kubenswrapper[4746]: I1211 11:01:57.617419 4746 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4hkbz" podUID="95ec6de4-4578-4ebc-9485-15161973a3e1" containerName="registry-server" probeResult="failure" output=< Dec 11 11:01:57 crc kubenswrapper[4746]: timeout: failed to connect service ":50051" within 1s Dec 11 11:01:57 crc kubenswrapper[4746]: > Dec 11 11:01:58 crc kubenswrapper[4746]: I1211 
11:01:58.630659 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 11:01:58 crc kubenswrapper[4746]: E1211 11:01:58.632267 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 11:02:06 crc kubenswrapper[4746]: I1211 11:02:06.617865 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:02:06 crc kubenswrapper[4746]: I1211 11:02:06.691383 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:02:06 crc kubenswrapper[4746]: I1211 11:02:06.878399 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4hkbz"] Dec 11 11:02:08 crc kubenswrapper[4746]: I1211 11:02:08.645769 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4hkbz" podUID="95ec6de4-4578-4ebc-9485-15161973a3e1" containerName="registry-server" containerID="cri-o://d409762d4b9459193a756d38757c8e17729c3d32da3dd1a63732261342d00dac" gracePeriod=2 Dec 11 11:02:09 crc kubenswrapper[4746]: I1211 11:02:09.661500 4746 generic.go:334] "Generic (PLEG): container finished" podID="95ec6de4-4578-4ebc-9485-15161973a3e1" containerID="d409762d4b9459193a756d38757c8e17729c3d32da3dd1a63732261342d00dac" exitCode=0 Dec 11 11:02:09 crc kubenswrapper[4746]: I1211 11:02:09.661699 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hkbz" 
event={"ID":"95ec6de4-4578-4ebc-9485-15161973a3e1","Type":"ContainerDied","Data":"d409762d4b9459193a756d38757c8e17729c3d32da3dd1a63732261342d00dac"} Dec 11 11:02:09 crc kubenswrapper[4746]: I1211 11:02:09.661977 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hkbz" event={"ID":"95ec6de4-4578-4ebc-9485-15161973a3e1","Type":"ContainerDied","Data":"a285cb3ad269f554cd4348ad184cce6ae4f2595f80b384e256a45ada2649e2ba"} Dec 11 11:02:09 crc kubenswrapper[4746]: I1211 11:02:09.661997 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a285cb3ad269f554cd4348ad184cce6ae4f2595f80b384e256a45ada2649e2ba" Dec 11 11:02:09 crc kubenswrapper[4746]: I1211 11:02:09.739550 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:02:09 crc kubenswrapper[4746]: I1211 11:02:09.871195 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ec6de4-4578-4ebc-9485-15161973a3e1-catalog-content\") pod \"95ec6de4-4578-4ebc-9485-15161973a3e1\" (UID: \"95ec6de4-4578-4ebc-9485-15161973a3e1\") " Dec 11 11:02:09 crc kubenswrapper[4746]: I1211 11:02:09.871324 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82wvl\" (UniqueName: \"kubernetes.io/projected/95ec6de4-4578-4ebc-9485-15161973a3e1-kube-api-access-82wvl\") pod \"95ec6de4-4578-4ebc-9485-15161973a3e1\" (UID: \"95ec6de4-4578-4ebc-9485-15161973a3e1\") " Dec 11 11:02:09 crc kubenswrapper[4746]: I1211 11:02:09.871463 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ec6de4-4578-4ebc-9485-15161973a3e1-utilities\") pod \"95ec6de4-4578-4ebc-9485-15161973a3e1\" (UID: \"95ec6de4-4578-4ebc-9485-15161973a3e1\") " Dec 11 11:02:09 crc kubenswrapper[4746]: I1211 
11:02:09.873938 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95ec6de4-4578-4ebc-9485-15161973a3e1-utilities" (OuterVolumeSpecName: "utilities") pod "95ec6de4-4578-4ebc-9485-15161973a3e1" (UID: "95ec6de4-4578-4ebc-9485-15161973a3e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:02:09 crc kubenswrapper[4746]: I1211 11:02:09.879485 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95ec6de4-4578-4ebc-9485-15161973a3e1-kube-api-access-82wvl" (OuterVolumeSpecName: "kube-api-access-82wvl") pod "95ec6de4-4578-4ebc-9485-15161973a3e1" (UID: "95ec6de4-4578-4ebc-9485-15161973a3e1"). InnerVolumeSpecName "kube-api-access-82wvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:02:09 crc kubenswrapper[4746]: I1211 11:02:09.974461 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82wvl\" (UniqueName: \"kubernetes.io/projected/95ec6de4-4578-4ebc-9485-15161973a3e1-kube-api-access-82wvl\") on node \"crc\" DevicePath \"\"" Dec 11 11:02:09 crc kubenswrapper[4746]: I1211 11:02:09.974502 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95ec6de4-4578-4ebc-9485-15161973a3e1-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 11:02:09 crc kubenswrapper[4746]: I1211 11:02:09.998818 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95ec6de4-4578-4ebc-9485-15161973a3e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95ec6de4-4578-4ebc-9485-15161973a3e1" (UID: "95ec6de4-4578-4ebc-9485-15161973a3e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:02:10 crc kubenswrapper[4746]: I1211 11:02:10.076758 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95ec6de4-4578-4ebc-9485-15161973a3e1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 11:02:10 crc kubenswrapper[4746]: I1211 11:02:10.631597 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 11:02:10 crc kubenswrapper[4746]: E1211 11:02:10.631976 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 11:02:10 crc kubenswrapper[4746]: I1211 11:02:10.673622 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4hkbz" Dec 11 11:02:10 crc kubenswrapper[4746]: I1211 11:02:10.726100 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4hkbz"] Dec 11 11:02:10 crc kubenswrapper[4746]: I1211 11:02:10.736252 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4hkbz"] Dec 11 11:02:11 crc kubenswrapper[4746]: I1211 11:02:11.647010 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95ec6de4-4578-4ebc-9485-15161973a3e1" path="/var/lib/kubelet/pods/95ec6de4-4578-4ebc-9485-15161973a3e1/volumes" Dec 11 11:02:22 crc kubenswrapper[4746]: I1211 11:02:22.630863 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 11:02:22 crc kubenswrapper[4746]: E1211 11:02:22.631748 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 11:02:34 crc kubenswrapper[4746]: I1211 11:02:34.630294 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 11:02:34 crc kubenswrapper[4746]: E1211 11:02:34.631019 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" 
podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 11:02:39 crc kubenswrapper[4746]: I1211 11:02:39.979088 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-76nst/must-gather-dxpv8"] Dec 11 11:02:39 crc kubenswrapper[4746]: E1211 11:02:39.982021 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ec6de4-4578-4ebc-9485-15161973a3e1" containerName="registry-server" Dec 11 11:02:39 crc kubenswrapper[4746]: I1211 11:02:39.982071 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ec6de4-4578-4ebc-9485-15161973a3e1" containerName="registry-server" Dec 11 11:02:39 crc kubenswrapper[4746]: E1211 11:02:39.982113 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ec6de4-4578-4ebc-9485-15161973a3e1" containerName="extract-content" Dec 11 11:02:39 crc kubenswrapper[4746]: I1211 11:02:39.982133 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ec6de4-4578-4ebc-9485-15161973a3e1" containerName="extract-content" Dec 11 11:02:39 crc kubenswrapper[4746]: E1211 11:02:39.982141 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95ec6de4-4578-4ebc-9485-15161973a3e1" containerName="extract-utilities" Dec 11 11:02:39 crc kubenswrapper[4746]: I1211 11:02:39.982150 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="95ec6de4-4578-4ebc-9485-15161973a3e1" containerName="extract-utilities" Dec 11 11:02:39 crc kubenswrapper[4746]: I1211 11:02:39.984140 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="95ec6de4-4578-4ebc-9485-15161973a3e1" containerName="registry-server" Dec 11 11:02:39 crc kubenswrapper[4746]: I1211 11:02:39.988429 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-76nst/must-gather-dxpv8" Dec 11 11:02:39 crc kubenswrapper[4746]: I1211 11:02:39.995011 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-76nst"/"kube-root-ca.crt" Dec 11 11:02:39 crc kubenswrapper[4746]: I1211 11:02:39.997163 4746 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-76nst"/"openshift-service-ca.crt" Dec 11 11:02:40 crc kubenswrapper[4746]: I1211 11:02:40.004178 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwdg4\" (UniqueName: \"kubernetes.io/projected/bf2bf3e5-4313-4676-91af-0f30747037ca-kube-api-access-mwdg4\") pod \"must-gather-dxpv8\" (UID: \"bf2bf3e5-4313-4676-91af-0f30747037ca\") " pod="openshift-must-gather-76nst/must-gather-dxpv8" Dec 11 11:02:40 crc kubenswrapper[4746]: I1211 11:02:40.005191 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf2bf3e5-4313-4676-91af-0f30747037ca-must-gather-output\") pod \"must-gather-dxpv8\" (UID: \"bf2bf3e5-4313-4676-91af-0f30747037ca\") " pod="openshift-must-gather-76nst/must-gather-dxpv8" Dec 11 11:02:40 crc kubenswrapper[4746]: I1211 11:02:40.025674 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-76nst/must-gather-dxpv8"] Dec 11 11:02:40 crc kubenswrapper[4746]: I1211 11:02:40.107308 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwdg4\" (UniqueName: \"kubernetes.io/projected/bf2bf3e5-4313-4676-91af-0f30747037ca-kube-api-access-mwdg4\") pod \"must-gather-dxpv8\" (UID: \"bf2bf3e5-4313-4676-91af-0f30747037ca\") " pod="openshift-must-gather-76nst/must-gather-dxpv8" Dec 11 11:02:40 crc kubenswrapper[4746]: I1211 11:02:40.107550 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf2bf3e5-4313-4676-91af-0f30747037ca-must-gather-output\") pod \"must-gather-dxpv8\" (UID: \"bf2bf3e5-4313-4676-91af-0f30747037ca\") " pod="openshift-must-gather-76nst/must-gather-dxpv8" Dec 11 11:02:40 crc kubenswrapper[4746]: I1211 11:02:40.108119 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf2bf3e5-4313-4676-91af-0f30747037ca-must-gather-output\") pod \"must-gather-dxpv8\" (UID: \"bf2bf3e5-4313-4676-91af-0f30747037ca\") " pod="openshift-must-gather-76nst/must-gather-dxpv8" Dec 11 11:02:40 crc kubenswrapper[4746]: I1211 11:02:40.519444 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwdg4\" (UniqueName: \"kubernetes.io/projected/bf2bf3e5-4313-4676-91af-0f30747037ca-kube-api-access-mwdg4\") pod \"must-gather-dxpv8\" (UID: \"bf2bf3e5-4313-4676-91af-0f30747037ca\") " pod="openshift-must-gather-76nst/must-gather-dxpv8" Dec 11 11:02:40 crc kubenswrapper[4746]: I1211 11:02:40.629877 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-76nst/must-gather-dxpv8" Dec 11 11:02:41 crc kubenswrapper[4746]: I1211 11:02:41.159584 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-76nst/must-gather-dxpv8"] Dec 11 11:02:42 crc kubenswrapper[4746]: I1211 11:02:42.017916 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76nst/must-gather-dxpv8" event={"ID":"bf2bf3e5-4313-4676-91af-0f30747037ca","Type":"ContainerStarted","Data":"95a444e12377dcd4334d56a13a8c3ad6e3cca545b40b3aeafbaca855eb70b3b5"} Dec 11 11:02:42 crc kubenswrapper[4746]: I1211 11:02:42.017994 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76nst/must-gather-dxpv8" event={"ID":"bf2bf3e5-4313-4676-91af-0f30747037ca","Type":"ContainerStarted","Data":"bb9cd277a0aca35509071a11827c297a6f0fc23a250c30a3d0fade16f066ff44"} Dec 11 11:02:42 crc kubenswrapper[4746]: I1211 11:02:42.018028 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76nst/must-gather-dxpv8" event={"ID":"bf2bf3e5-4313-4676-91af-0f30747037ca","Type":"ContainerStarted","Data":"c255c570e51677253b9954d711948f7bfdeaa8572a6ee35de0120ec572729d1d"} Dec 11 11:02:42 crc kubenswrapper[4746]: I1211 11:02:42.047468 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-76nst/must-gather-dxpv8" podStartSLOduration=3.04743607 podStartE2EDuration="3.04743607s" podCreationTimestamp="2025-12-11 11:02:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 11:02:42.039069884 +0000 UTC m=+4134.898933197" watchObservedRunningTime="2025-12-11 11:02:42.04743607 +0000 UTC m=+4134.907299383" Dec 11 11:02:45 crc kubenswrapper[4746]: I1211 11:02:45.641213 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-76nst/crc-debug-nj695"] Dec 11 11:02:45 crc kubenswrapper[4746]: 
I1211 11:02:45.644513 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76nst/crc-debug-nj695" Dec 11 11:02:45 crc kubenswrapper[4746]: I1211 11:02:45.647745 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-76nst"/"default-dockercfg-c9f48" Dec 11 11:02:45 crc kubenswrapper[4746]: I1211 11:02:45.757511 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/185ea57e-f9c3-4d13-9501-424880b8fe08-host\") pod \"crc-debug-nj695\" (UID: \"185ea57e-f9c3-4d13-9501-424880b8fe08\") " pod="openshift-must-gather-76nst/crc-debug-nj695" Dec 11 11:02:45 crc kubenswrapper[4746]: I1211 11:02:45.757933 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr4qp\" (UniqueName: \"kubernetes.io/projected/185ea57e-f9c3-4d13-9501-424880b8fe08-kube-api-access-wr4qp\") pod \"crc-debug-nj695\" (UID: \"185ea57e-f9c3-4d13-9501-424880b8fe08\") " pod="openshift-must-gather-76nst/crc-debug-nj695" Dec 11 11:02:45 crc kubenswrapper[4746]: I1211 11:02:45.861126 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/185ea57e-f9c3-4d13-9501-424880b8fe08-host\") pod \"crc-debug-nj695\" (UID: \"185ea57e-f9c3-4d13-9501-424880b8fe08\") " pod="openshift-must-gather-76nst/crc-debug-nj695" Dec 11 11:02:45 crc kubenswrapper[4746]: I1211 11:02:45.861256 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr4qp\" (UniqueName: \"kubernetes.io/projected/185ea57e-f9c3-4d13-9501-424880b8fe08-kube-api-access-wr4qp\") pod \"crc-debug-nj695\" (UID: \"185ea57e-f9c3-4d13-9501-424880b8fe08\") " pod="openshift-must-gather-76nst/crc-debug-nj695" Dec 11 11:02:45 crc kubenswrapper[4746]: I1211 11:02:45.861300 4746 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/185ea57e-f9c3-4d13-9501-424880b8fe08-host\") pod \"crc-debug-nj695\" (UID: \"185ea57e-f9c3-4d13-9501-424880b8fe08\") " pod="openshift-must-gather-76nst/crc-debug-nj695" Dec 11 11:02:45 crc kubenswrapper[4746]: I1211 11:02:45.884394 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr4qp\" (UniqueName: \"kubernetes.io/projected/185ea57e-f9c3-4d13-9501-424880b8fe08-kube-api-access-wr4qp\") pod \"crc-debug-nj695\" (UID: \"185ea57e-f9c3-4d13-9501-424880b8fe08\") " pod="openshift-must-gather-76nst/crc-debug-nj695" Dec 11 11:02:45 crc kubenswrapper[4746]: I1211 11:02:45.968605 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76nst/crc-debug-nj695" Dec 11 11:02:46 crc kubenswrapper[4746]: I1211 11:02:46.056841 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76nst/crc-debug-nj695" event={"ID":"185ea57e-f9c3-4d13-9501-424880b8fe08","Type":"ContainerStarted","Data":"bdbc7ac512039740a34bad79687810d1d9574865e97a8676772cf403add63cf4"} Dec 11 11:02:47 crc kubenswrapper[4746]: I1211 11:02:47.069020 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76nst/crc-debug-nj695" event={"ID":"185ea57e-f9c3-4d13-9501-424880b8fe08","Type":"ContainerStarted","Data":"bbf24cf6e0c366d7967761a25966b284483010a1b5e160fffec2db684e732693"} Dec 11 11:02:47 crc kubenswrapper[4746]: I1211 11:02:47.088416 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-76nst/crc-debug-nj695" podStartSLOduration=2.088391156 podStartE2EDuration="2.088391156s" podCreationTimestamp="2025-12-11 11:02:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-11 11:02:47.081716976 +0000 UTC m=+4139.941580309" watchObservedRunningTime="2025-12-11 11:02:47.088391156 
+0000 UTC m=+4139.948254469" Dec 11 11:02:49 crc kubenswrapper[4746]: I1211 11:02:49.630845 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 11:02:49 crc kubenswrapper[4746]: E1211 11:02:49.631724 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" Dec 11 11:02:51 crc kubenswrapper[4746]: I1211 11:02:51.275616 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gwpfm"] Dec 11 11:02:51 crc kubenswrapper[4746]: I1211 11:02:51.278893 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:02:51 crc kubenswrapper[4746]: I1211 11:02:51.305991 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gwpfm"] Dec 11 11:02:51 crc kubenswrapper[4746]: I1211 11:02:51.320304 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/947409ad-389f-4bae-bb47-021f76688c59-catalog-content\") pod \"community-operators-gwpfm\" (UID: \"947409ad-389f-4bae-bb47-021f76688c59\") " pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:02:51 crc kubenswrapper[4746]: I1211 11:02:51.320378 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppvm2\" (UniqueName: \"kubernetes.io/projected/947409ad-389f-4bae-bb47-021f76688c59-kube-api-access-ppvm2\") pod \"community-operators-gwpfm\" (UID: 
\"947409ad-389f-4bae-bb47-021f76688c59\") " pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:02:51 crc kubenswrapper[4746]: I1211 11:02:51.320500 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/947409ad-389f-4bae-bb47-021f76688c59-utilities\") pod \"community-operators-gwpfm\" (UID: \"947409ad-389f-4bae-bb47-021f76688c59\") " pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:02:51 crc kubenswrapper[4746]: I1211 11:02:51.422062 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/947409ad-389f-4bae-bb47-021f76688c59-catalog-content\") pod \"community-operators-gwpfm\" (UID: \"947409ad-389f-4bae-bb47-021f76688c59\") " pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:02:51 crc kubenswrapper[4746]: I1211 11:02:51.422148 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppvm2\" (UniqueName: \"kubernetes.io/projected/947409ad-389f-4bae-bb47-021f76688c59-kube-api-access-ppvm2\") pod \"community-operators-gwpfm\" (UID: \"947409ad-389f-4bae-bb47-021f76688c59\") " pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:02:51 crc kubenswrapper[4746]: I1211 11:02:51.422228 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/947409ad-389f-4bae-bb47-021f76688c59-utilities\") pod \"community-operators-gwpfm\" (UID: \"947409ad-389f-4bae-bb47-021f76688c59\") " pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:02:51 crc kubenswrapper[4746]: I1211 11:02:51.422704 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/947409ad-389f-4bae-bb47-021f76688c59-catalog-content\") pod \"community-operators-gwpfm\" (UID: 
\"947409ad-389f-4bae-bb47-021f76688c59\") " pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:02:51 crc kubenswrapper[4746]: I1211 11:02:51.422966 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/947409ad-389f-4bae-bb47-021f76688c59-utilities\") pod \"community-operators-gwpfm\" (UID: \"947409ad-389f-4bae-bb47-021f76688c59\") " pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:02:51 crc kubenswrapper[4746]: I1211 11:02:51.451530 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppvm2\" (UniqueName: \"kubernetes.io/projected/947409ad-389f-4bae-bb47-021f76688c59-kube-api-access-ppvm2\") pod \"community-operators-gwpfm\" (UID: \"947409ad-389f-4bae-bb47-021f76688c59\") " pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:02:51 crc kubenswrapper[4746]: I1211 11:02:51.600478 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:02:52 crc kubenswrapper[4746]: I1211 11:02:52.755165 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gwpfm"] Dec 11 11:02:53 crc kubenswrapper[4746]: I1211 11:02:53.124570 4746 generic.go:334] "Generic (PLEG): container finished" podID="947409ad-389f-4bae-bb47-021f76688c59" containerID="03b881bec57f4c57de4f42562551c94ba2d86876a06a079b10fa37c3a885d5f8" exitCode=0 Dec 11 11:02:53 crc kubenswrapper[4746]: I1211 11:02:53.124632 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwpfm" event={"ID":"947409ad-389f-4bae-bb47-021f76688c59","Type":"ContainerDied","Data":"03b881bec57f4c57de4f42562551c94ba2d86876a06a079b10fa37c3a885d5f8"} Dec 11 11:02:53 crc kubenswrapper[4746]: I1211 11:02:53.124659 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwpfm" 
event={"ID":"947409ad-389f-4bae-bb47-021f76688c59","Type":"ContainerStarted","Data":"733e0b8d43b4bf4338405b4dc5c04035a2bee8a283f4ed7f301c484913f7ccc6"} Dec 11 11:02:55 crc kubenswrapper[4746]: I1211 11:02:55.152800 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwpfm" event={"ID":"947409ad-389f-4bae-bb47-021f76688c59","Type":"ContainerStarted","Data":"72cf47075713a1b232d868f9f65733eb901d6e64615b0d08514ee955fec9230b"} Dec 11 11:02:56 crc kubenswrapper[4746]: I1211 11:02:56.162778 4746 generic.go:334] "Generic (PLEG): container finished" podID="947409ad-389f-4bae-bb47-021f76688c59" containerID="72cf47075713a1b232d868f9f65733eb901d6e64615b0d08514ee955fec9230b" exitCode=0 Dec 11 11:02:56 crc kubenswrapper[4746]: I1211 11:02:56.162827 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwpfm" event={"ID":"947409ad-389f-4bae-bb47-021f76688c59","Type":"ContainerDied","Data":"72cf47075713a1b232d868f9f65733eb901d6e64615b0d08514ee955fec9230b"} Dec 11 11:02:58 crc kubenswrapper[4746]: I1211 11:02:58.188065 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwpfm" event={"ID":"947409ad-389f-4bae-bb47-021f76688c59","Type":"ContainerStarted","Data":"3b3866be010716c364af44f65ab1f55e58ccce6e9d6d46ddd9c8a0e810bc50be"} Dec 11 11:02:58 crc kubenswrapper[4746]: I1211 11:02:58.216265 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gwpfm" podStartSLOduration=3.42073573 podStartE2EDuration="7.216236717s" podCreationTimestamp="2025-12-11 11:02:51 +0000 UTC" firstStartedPulling="2025-12-11 11:02:53.126929585 +0000 UTC m=+4145.986792898" lastFinishedPulling="2025-12-11 11:02:56.922430562 +0000 UTC m=+4149.782293885" observedRunningTime="2025-12-11 11:02:58.205396724 +0000 UTC m=+4151.065260047" watchObservedRunningTime="2025-12-11 11:02:58.216236717 +0000 UTC 
m=+4151.076100030" Dec 11 11:03:00 crc kubenswrapper[4746]: I1211 11:03:00.631221 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 11:03:01 crc kubenswrapper[4746]: I1211 11:03:01.215658 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"ed209c0846292b30a240d291d5bc22ab51a73bb55d6b151c1f8bd1bd450c7707"} Dec 11 11:03:01 crc kubenswrapper[4746]: I1211 11:03:01.601211 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:03:01 crc kubenswrapper[4746]: I1211 11:03:01.601281 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:03:01 crc kubenswrapper[4746]: I1211 11:03:01.656737 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:03:02 crc kubenswrapper[4746]: I1211 11:03:02.277632 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:03:02 crc kubenswrapper[4746]: I1211 11:03:02.334301 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gwpfm"] Dec 11 11:03:04 crc kubenswrapper[4746]: I1211 11:03:04.244621 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gwpfm" podUID="947409ad-389f-4bae-bb47-021f76688c59" containerName="registry-server" containerID="cri-o://3b3866be010716c364af44f65ab1f55e58ccce6e9d6d46ddd9c8a0e810bc50be" gracePeriod=2 Dec 11 11:03:05 crc kubenswrapper[4746]: I1211 11:03:05.268940 4746 generic.go:334] "Generic (PLEG): container finished" 
podID="947409ad-389f-4bae-bb47-021f76688c59" containerID="3b3866be010716c364af44f65ab1f55e58ccce6e9d6d46ddd9c8a0e810bc50be" exitCode=0 Dec 11 11:03:05 crc kubenswrapper[4746]: I1211 11:03:05.269024 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwpfm" event={"ID":"947409ad-389f-4bae-bb47-021f76688c59","Type":"ContainerDied","Data":"3b3866be010716c364af44f65ab1f55e58ccce6e9d6d46ddd9c8a0e810bc50be"} Dec 11 11:03:05 crc kubenswrapper[4746]: I1211 11:03:05.269501 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gwpfm" event={"ID":"947409ad-389f-4bae-bb47-021f76688c59","Type":"ContainerDied","Data":"733e0b8d43b4bf4338405b4dc5c04035a2bee8a283f4ed7f301c484913f7ccc6"} Dec 11 11:03:05 crc kubenswrapper[4746]: I1211 11:03:05.269523 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="733e0b8d43b4bf4338405b4dc5c04035a2bee8a283f4ed7f301c484913f7ccc6" Dec 11 11:03:05 crc kubenswrapper[4746]: I1211 11:03:05.342427 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:03:05 crc kubenswrapper[4746]: I1211 11:03:05.513446 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/947409ad-389f-4bae-bb47-021f76688c59-utilities\") pod \"947409ad-389f-4bae-bb47-021f76688c59\" (UID: \"947409ad-389f-4bae-bb47-021f76688c59\") " Dec 11 11:03:05 crc kubenswrapper[4746]: I1211 11:03:05.513982 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/947409ad-389f-4bae-bb47-021f76688c59-catalog-content\") pod \"947409ad-389f-4bae-bb47-021f76688c59\" (UID: \"947409ad-389f-4bae-bb47-021f76688c59\") " Dec 11 11:03:05 crc kubenswrapper[4746]: I1211 11:03:05.514024 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppvm2\" (UniqueName: \"kubernetes.io/projected/947409ad-389f-4bae-bb47-021f76688c59-kube-api-access-ppvm2\") pod \"947409ad-389f-4bae-bb47-021f76688c59\" (UID: \"947409ad-389f-4bae-bb47-021f76688c59\") " Dec 11 11:03:05 crc kubenswrapper[4746]: I1211 11:03:05.514557 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/947409ad-389f-4bae-bb47-021f76688c59-utilities" (OuterVolumeSpecName: "utilities") pod "947409ad-389f-4bae-bb47-021f76688c59" (UID: "947409ad-389f-4bae-bb47-021f76688c59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:03:05 crc kubenswrapper[4746]: I1211 11:03:05.526515 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/947409ad-389f-4bae-bb47-021f76688c59-kube-api-access-ppvm2" (OuterVolumeSpecName: "kube-api-access-ppvm2") pod "947409ad-389f-4bae-bb47-021f76688c59" (UID: "947409ad-389f-4bae-bb47-021f76688c59"). InnerVolumeSpecName "kube-api-access-ppvm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:03:05 crc kubenswrapper[4746]: I1211 11:03:05.596996 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/947409ad-389f-4bae-bb47-021f76688c59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "947409ad-389f-4bae-bb47-021f76688c59" (UID: "947409ad-389f-4bae-bb47-021f76688c59"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:03:05 crc kubenswrapper[4746]: I1211 11:03:05.616831 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/947409ad-389f-4bae-bb47-021f76688c59-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 11:03:05 crc kubenswrapper[4746]: I1211 11:03:05.616919 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/947409ad-389f-4bae-bb47-021f76688c59-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 11:03:05 crc kubenswrapper[4746]: I1211 11:03:05.616944 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppvm2\" (UniqueName: \"kubernetes.io/projected/947409ad-389f-4bae-bb47-021f76688c59-kube-api-access-ppvm2\") on node \"crc\" DevicePath \"\"" Dec 11 11:03:06 crc kubenswrapper[4746]: I1211 11:03:06.278113 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gwpfm" Dec 11 11:03:06 crc kubenswrapper[4746]: I1211 11:03:06.308905 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gwpfm"] Dec 11 11:03:06 crc kubenswrapper[4746]: I1211 11:03:06.319461 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gwpfm"] Dec 11 11:03:07 crc kubenswrapper[4746]: I1211 11:03:07.641570 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="947409ad-389f-4bae-bb47-021f76688c59" path="/var/lib/kubelet/pods/947409ad-389f-4bae-bb47-021f76688c59/volumes" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.066613 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wjmd5"] Dec 11 11:03:20 crc kubenswrapper[4746]: E1211 11:03:20.068125 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947409ad-389f-4bae-bb47-021f76688c59" containerName="extract-utilities" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.068146 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="947409ad-389f-4bae-bb47-021f76688c59" containerName="extract-utilities" Dec 11 11:03:20 crc kubenswrapper[4746]: E1211 11:03:20.068184 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947409ad-389f-4bae-bb47-021f76688c59" containerName="registry-server" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.068192 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="947409ad-389f-4bae-bb47-021f76688c59" containerName="registry-server" Dec 11 11:03:20 crc kubenswrapper[4746]: E1211 11:03:20.068208 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="947409ad-389f-4bae-bb47-021f76688c59" containerName="extract-content" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.068214 4746 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="947409ad-389f-4bae-bb47-021f76688c59" containerName="extract-content" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.068503 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="947409ad-389f-4bae-bb47-021f76688c59" containerName="registry-server" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.070350 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.109207 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wjmd5"] Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.205038 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw6q7\" (UniqueName: \"kubernetes.io/projected/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-kube-api-access-tw6q7\") pod \"certified-operators-wjmd5\" (UID: \"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b\") " pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.205215 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-catalog-content\") pod \"certified-operators-wjmd5\" (UID: \"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b\") " pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.205294 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-utilities\") pod \"certified-operators-wjmd5\" (UID: \"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b\") " pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.308420 4746 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-catalog-content\") pod \"certified-operators-wjmd5\" (UID: \"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b\") " pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.309037 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-utilities\") pod \"certified-operators-wjmd5\" (UID: \"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b\") " pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.309195 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw6q7\" (UniqueName: \"kubernetes.io/projected/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-kube-api-access-tw6q7\") pod \"certified-operators-wjmd5\" (UID: \"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b\") " pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.309310 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-catalog-content\") pod \"certified-operators-wjmd5\" (UID: \"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b\") " pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.309671 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-utilities\") pod \"certified-operators-wjmd5\" (UID: \"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b\") " pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.335485 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tw6q7\" (UniqueName: \"kubernetes.io/projected/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-kube-api-access-tw6q7\") pod \"certified-operators-wjmd5\" (UID: \"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b\") " pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.404388 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:20 crc kubenswrapper[4746]: I1211 11:03:20.955027 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wjmd5"] Dec 11 11:03:20 crc kubenswrapper[4746]: W1211 11:03:20.966021 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod417bcb0e_7a7f_4d0f_804f_ca3a8fac6f7b.slice/crio-af8b47635867498d0d0c8bb02d7cf61b3a43d58fe7d2a8719a5b313262c2f76f WatchSource:0}: Error finding container af8b47635867498d0d0c8bb02d7cf61b3a43d58fe7d2a8719a5b313262c2f76f: Status 404 returned error can't find the container with id af8b47635867498d0d0c8bb02d7cf61b3a43d58fe7d2a8719a5b313262c2f76f Dec 11 11:03:21 crc kubenswrapper[4746]: I1211 11:03:21.425390 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjmd5" event={"ID":"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b","Type":"ContainerStarted","Data":"0b265167300d318428f15be256e0aeb13a14289458db34899bfbe530080fb022"} Dec 11 11:03:21 crc kubenswrapper[4746]: I1211 11:03:21.425989 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjmd5" event={"ID":"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b","Type":"ContainerStarted","Data":"af8b47635867498d0d0c8bb02d7cf61b3a43d58fe7d2a8719a5b313262c2f76f"} Dec 11 11:03:22 crc kubenswrapper[4746]: I1211 11:03:22.438341 4746 generic.go:334] "Generic (PLEG): container finished" podID="417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b" 
containerID="0b265167300d318428f15be256e0aeb13a14289458db34899bfbe530080fb022" exitCode=0 Dec 11 11:03:22 crc kubenswrapper[4746]: I1211 11:03:22.438404 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjmd5" event={"ID":"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b","Type":"ContainerDied","Data":"0b265167300d318428f15be256e0aeb13a14289458db34899bfbe530080fb022"} Dec 11 11:03:24 crc kubenswrapper[4746]: I1211 11:03:24.459880 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjmd5" event={"ID":"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b","Type":"ContainerStarted","Data":"62820c50d2cab6ecae4cefd4d459dcd225e31480e4a1c6bc218af7d8868560d9"} Dec 11 11:03:25 crc kubenswrapper[4746]: I1211 11:03:25.471770 4746 generic.go:334] "Generic (PLEG): container finished" podID="417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b" containerID="62820c50d2cab6ecae4cefd4d459dcd225e31480e4a1c6bc218af7d8868560d9" exitCode=0 Dec 11 11:03:25 crc kubenswrapper[4746]: I1211 11:03:25.471895 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjmd5" event={"ID":"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b","Type":"ContainerDied","Data":"62820c50d2cab6ecae4cefd4d459dcd225e31480e4a1c6bc218af7d8868560d9"} Dec 11 11:03:26 crc kubenswrapper[4746]: I1211 11:03:26.486124 4746 generic.go:334] "Generic (PLEG): container finished" podID="185ea57e-f9c3-4d13-9501-424880b8fe08" containerID="bbf24cf6e0c366d7967761a25966b284483010a1b5e160fffec2db684e732693" exitCode=0 Dec 11 11:03:26 crc kubenswrapper[4746]: I1211 11:03:26.486206 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76nst/crc-debug-nj695" event={"ID":"185ea57e-f9c3-4d13-9501-424880b8fe08","Type":"ContainerDied","Data":"bbf24cf6e0c366d7967761a25966b284483010a1b5e160fffec2db684e732693"} Dec 11 11:03:26 crc kubenswrapper[4746]: I1211 11:03:26.490370 4746 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/certified-operators-wjmd5" event={"ID":"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b","Type":"ContainerStarted","Data":"63701e16b5a858de61dde8e8f34c079f7fa0979403eeaa1c91a0d4a3216a392e"} Dec 11 11:03:26 crc kubenswrapper[4746]: I1211 11:03:26.538177 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wjmd5" podStartSLOduration=2.805710654 podStartE2EDuration="6.538156729s" podCreationTimestamp="2025-12-11 11:03:20 +0000 UTC" firstStartedPulling="2025-12-11 11:03:22.441146705 +0000 UTC m=+4175.301010018" lastFinishedPulling="2025-12-11 11:03:26.17359278 +0000 UTC m=+4179.033456093" observedRunningTime="2025-12-11 11:03:26.531568381 +0000 UTC m=+4179.391431704" watchObservedRunningTime="2025-12-11 11:03:26.538156729 +0000 UTC m=+4179.398020042" Dec 11 11:03:27 crc kubenswrapper[4746]: I1211 11:03:27.940329 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76nst/crc-debug-nj695" Dec 11 11:03:27 crc kubenswrapper[4746]: I1211 11:03:27.979981 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-76nst/crc-debug-nj695"] Dec 11 11:03:27 crc kubenswrapper[4746]: I1211 11:03:27.989975 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-76nst/crc-debug-nj695"] Dec 11 11:03:28 crc kubenswrapper[4746]: I1211 11:03:28.092609 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/185ea57e-f9c3-4d13-9501-424880b8fe08-host\") pod \"185ea57e-f9c3-4d13-9501-424880b8fe08\" (UID: \"185ea57e-f9c3-4d13-9501-424880b8fe08\") " Dec 11 11:03:28 crc kubenswrapper[4746]: I1211 11:03:28.093109 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wr4qp\" (UniqueName: \"kubernetes.io/projected/185ea57e-f9c3-4d13-9501-424880b8fe08-kube-api-access-wr4qp\") pod 
\"185ea57e-f9c3-4d13-9501-424880b8fe08\" (UID: \"185ea57e-f9c3-4d13-9501-424880b8fe08\") " Dec 11 11:03:28 crc kubenswrapper[4746]: I1211 11:03:28.094619 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/185ea57e-f9c3-4d13-9501-424880b8fe08-host" (OuterVolumeSpecName: "host") pod "185ea57e-f9c3-4d13-9501-424880b8fe08" (UID: "185ea57e-f9c3-4d13-9501-424880b8fe08"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 11:03:28 crc kubenswrapper[4746]: I1211 11:03:28.103358 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/185ea57e-f9c3-4d13-9501-424880b8fe08-kube-api-access-wr4qp" (OuterVolumeSpecName: "kube-api-access-wr4qp") pod "185ea57e-f9c3-4d13-9501-424880b8fe08" (UID: "185ea57e-f9c3-4d13-9501-424880b8fe08"). InnerVolumeSpecName "kube-api-access-wr4qp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:03:28 crc kubenswrapper[4746]: I1211 11:03:28.196132 4746 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/185ea57e-f9c3-4d13-9501-424880b8fe08-host\") on node \"crc\" DevicePath \"\"" Dec 11 11:03:28 crc kubenswrapper[4746]: I1211 11:03:28.196508 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wr4qp\" (UniqueName: \"kubernetes.io/projected/185ea57e-f9c3-4d13-9501-424880b8fe08-kube-api-access-wr4qp\") on node \"crc\" DevicePath \"\"" Dec 11 11:03:28 crc kubenswrapper[4746]: I1211 11:03:28.510106 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdbc7ac512039740a34bad79687810d1d9574865e97a8676772cf403add63cf4" Dec 11 11:03:28 crc kubenswrapper[4746]: I1211 11:03:28.510228 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-76nst/crc-debug-nj695" Dec 11 11:03:29 crc kubenswrapper[4746]: I1211 11:03:29.187185 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-76nst/crc-debug-ssrqw"] Dec 11 11:03:29 crc kubenswrapper[4746]: E1211 11:03:29.187645 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="185ea57e-f9c3-4d13-9501-424880b8fe08" containerName="container-00" Dec 11 11:03:29 crc kubenswrapper[4746]: I1211 11:03:29.187658 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="185ea57e-f9c3-4d13-9501-424880b8fe08" containerName="container-00" Dec 11 11:03:29 crc kubenswrapper[4746]: I1211 11:03:29.187839 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="185ea57e-f9c3-4d13-9501-424880b8fe08" containerName="container-00" Dec 11 11:03:29 crc kubenswrapper[4746]: I1211 11:03:29.188483 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76nst/crc-debug-ssrqw" Dec 11 11:03:29 crc kubenswrapper[4746]: I1211 11:03:29.190280 4746 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-76nst"/"default-dockercfg-c9f48" Dec 11 11:03:29 crc kubenswrapper[4746]: I1211 11:03:29.318329 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg6sj\" (UniqueName: \"kubernetes.io/projected/272df024-40d2-43aa-a1cc-bfe940d2642f-kube-api-access-xg6sj\") pod \"crc-debug-ssrqw\" (UID: \"272df024-40d2-43aa-a1cc-bfe940d2642f\") " pod="openshift-must-gather-76nst/crc-debug-ssrqw" Dec 11 11:03:29 crc kubenswrapper[4746]: I1211 11:03:29.318728 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/272df024-40d2-43aa-a1cc-bfe940d2642f-host\") pod \"crc-debug-ssrqw\" (UID: \"272df024-40d2-43aa-a1cc-bfe940d2642f\") " 
pod="openshift-must-gather-76nst/crc-debug-ssrqw" Dec 11 11:03:29 crc kubenswrapper[4746]: I1211 11:03:29.420967 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg6sj\" (UniqueName: \"kubernetes.io/projected/272df024-40d2-43aa-a1cc-bfe940d2642f-kube-api-access-xg6sj\") pod \"crc-debug-ssrqw\" (UID: \"272df024-40d2-43aa-a1cc-bfe940d2642f\") " pod="openshift-must-gather-76nst/crc-debug-ssrqw" Dec 11 11:03:29 crc kubenswrapper[4746]: I1211 11:03:29.421071 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/272df024-40d2-43aa-a1cc-bfe940d2642f-host\") pod \"crc-debug-ssrqw\" (UID: \"272df024-40d2-43aa-a1cc-bfe940d2642f\") " pod="openshift-must-gather-76nst/crc-debug-ssrqw" Dec 11 11:03:29 crc kubenswrapper[4746]: I1211 11:03:29.421203 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/272df024-40d2-43aa-a1cc-bfe940d2642f-host\") pod \"crc-debug-ssrqw\" (UID: \"272df024-40d2-43aa-a1cc-bfe940d2642f\") " pod="openshift-must-gather-76nst/crc-debug-ssrqw" Dec 11 11:03:29 crc kubenswrapper[4746]: I1211 11:03:29.450381 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg6sj\" (UniqueName: \"kubernetes.io/projected/272df024-40d2-43aa-a1cc-bfe940d2642f-kube-api-access-xg6sj\") pod \"crc-debug-ssrqw\" (UID: \"272df024-40d2-43aa-a1cc-bfe940d2642f\") " pod="openshift-must-gather-76nst/crc-debug-ssrqw" Dec 11 11:03:29 crc kubenswrapper[4746]: I1211 11:03:29.509703 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-76nst/crc-debug-ssrqw" Dec 11 11:03:29 crc kubenswrapper[4746]: I1211 11:03:29.643149 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="185ea57e-f9c3-4d13-9501-424880b8fe08" path="/var/lib/kubelet/pods/185ea57e-f9c3-4d13-9501-424880b8fe08/volumes" Dec 11 11:03:30 crc kubenswrapper[4746]: I1211 11:03:30.404827 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:30 crc kubenswrapper[4746]: I1211 11:03:30.405446 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:30 crc kubenswrapper[4746]: I1211 11:03:30.485779 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:30 crc kubenswrapper[4746]: I1211 11:03:30.541599 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76nst/crc-debug-ssrqw" event={"ID":"272df024-40d2-43aa-a1cc-bfe940d2642f","Type":"ContainerStarted","Data":"590eeecc2bb17e17aa846b52b3357afa7eb0cb8feeaa54067fe647f5e1992d37"} Dec 11 11:03:31 crc kubenswrapper[4746]: I1211 11:03:31.556710 4746 generic.go:334] "Generic (PLEG): container finished" podID="272df024-40d2-43aa-a1cc-bfe940d2642f" containerID="2ab22580af35472a96fbf5e07c3f43487f4d709f11d8ec29c47d2e3a8f2c6886" exitCode=0 Dec 11 11:03:31 crc kubenswrapper[4746]: I1211 11:03:31.556953 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76nst/crc-debug-ssrqw" event={"ID":"272df024-40d2-43aa-a1cc-bfe940d2642f","Type":"ContainerDied","Data":"2ab22580af35472a96fbf5e07c3f43487f4d709f11d8ec29c47d2e3a8f2c6886"} Dec 11 11:03:32 crc kubenswrapper[4746]: I1211 11:03:32.132836 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-76nst/crc-debug-ssrqw"] Dec 11 11:03:32 crc kubenswrapper[4746]: 
I1211 11:03:32.144358 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-76nst/crc-debug-ssrqw"] Dec 11 11:03:32 crc kubenswrapper[4746]: I1211 11:03:32.713919 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76nst/crc-debug-ssrqw" Dec 11 11:03:32 crc kubenswrapper[4746]: I1211 11:03:32.809341 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg6sj\" (UniqueName: \"kubernetes.io/projected/272df024-40d2-43aa-a1cc-bfe940d2642f-kube-api-access-xg6sj\") pod \"272df024-40d2-43aa-a1cc-bfe940d2642f\" (UID: \"272df024-40d2-43aa-a1cc-bfe940d2642f\") " Dec 11 11:03:32 crc kubenswrapper[4746]: I1211 11:03:32.809448 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/272df024-40d2-43aa-a1cc-bfe940d2642f-host\") pod \"272df024-40d2-43aa-a1cc-bfe940d2642f\" (UID: \"272df024-40d2-43aa-a1cc-bfe940d2642f\") " Dec 11 11:03:32 crc kubenswrapper[4746]: I1211 11:03:32.809532 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/272df024-40d2-43aa-a1cc-bfe940d2642f-host" (OuterVolumeSpecName: "host") pod "272df024-40d2-43aa-a1cc-bfe940d2642f" (UID: "272df024-40d2-43aa-a1cc-bfe940d2642f"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 11:03:32 crc kubenswrapper[4746]: I1211 11:03:32.809987 4746 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/272df024-40d2-43aa-a1cc-bfe940d2642f-host\") on node \"crc\" DevicePath \"\"" Dec 11 11:03:32 crc kubenswrapper[4746]: I1211 11:03:32.816903 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/272df024-40d2-43aa-a1cc-bfe940d2642f-kube-api-access-xg6sj" (OuterVolumeSpecName: "kube-api-access-xg6sj") pod "272df024-40d2-43aa-a1cc-bfe940d2642f" (UID: "272df024-40d2-43aa-a1cc-bfe940d2642f"). InnerVolumeSpecName "kube-api-access-xg6sj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:03:32 crc kubenswrapper[4746]: I1211 11:03:32.911839 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg6sj\" (UniqueName: \"kubernetes.io/projected/272df024-40d2-43aa-a1cc-bfe940d2642f-kube-api-access-xg6sj\") on node \"crc\" DevicePath \"\"" Dec 11 11:03:33 crc kubenswrapper[4746]: I1211 11:03:33.347899 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-76nst/crc-debug-pwjp8"] Dec 11 11:03:33 crc kubenswrapper[4746]: E1211 11:03:33.348944 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="272df024-40d2-43aa-a1cc-bfe940d2642f" containerName="container-00" Dec 11 11:03:33 crc kubenswrapper[4746]: I1211 11:03:33.348970 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="272df024-40d2-43aa-a1cc-bfe940d2642f" containerName="container-00" Dec 11 11:03:33 crc kubenswrapper[4746]: I1211 11:03:33.349294 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="272df024-40d2-43aa-a1cc-bfe940d2642f" containerName="container-00" Dec 11 11:03:33 crc kubenswrapper[4746]: I1211 11:03:33.350369 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-76nst/crc-debug-pwjp8" Dec 11 11:03:33 crc kubenswrapper[4746]: I1211 11:03:33.527534 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7bf83e07-e5a0-4eb6-accd-3dd74442bb9c-host\") pod \"crc-debug-pwjp8\" (UID: \"7bf83e07-e5a0-4eb6-accd-3dd74442bb9c\") " pod="openshift-must-gather-76nst/crc-debug-pwjp8" Dec 11 11:03:33 crc kubenswrapper[4746]: I1211 11:03:33.527636 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtbgs\" (UniqueName: \"kubernetes.io/projected/7bf83e07-e5a0-4eb6-accd-3dd74442bb9c-kube-api-access-qtbgs\") pod \"crc-debug-pwjp8\" (UID: \"7bf83e07-e5a0-4eb6-accd-3dd74442bb9c\") " pod="openshift-must-gather-76nst/crc-debug-pwjp8" Dec 11 11:03:33 crc kubenswrapper[4746]: I1211 11:03:33.578668 4746 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="590eeecc2bb17e17aa846b52b3357afa7eb0cb8feeaa54067fe647f5e1992d37" Dec 11 11:03:33 crc kubenswrapper[4746]: I1211 11:03:33.578764 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-76nst/crc-debug-ssrqw" Dec 11 11:03:33 crc kubenswrapper[4746]: I1211 11:03:33.630207 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7bf83e07-e5a0-4eb6-accd-3dd74442bb9c-host\") pod \"crc-debug-pwjp8\" (UID: \"7bf83e07-e5a0-4eb6-accd-3dd74442bb9c\") " pod="openshift-must-gather-76nst/crc-debug-pwjp8" Dec 11 11:03:33 crc kubenswrapper[4746]: I1211 11:03:33.630350 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtbgs\" (UniqueName: \"kubernetes.io/projected/7bf83e07-e5a0-4eb6-accd-3dd74442bb9c-kube-api-access-qtbgs\") pod \"crc-debug-pwjp8\" (UID: \"7bf83e07-e5a0-4eb6-accd-3dd74442bb9c\") " pod="openshift-must-gather-76nst/crc-debug-pwjp8" Dec 11 11:03:33 crc kubenswrapper[4746]: I1211 11:03:33.630447 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7bf83e07-e5a0-4eb6-accd-3dd74442bb9c-host\") pod \"crc-debug-pwjp8\" (UID: \"7bf83e07-e5a0-4eb6-accd-3dd74442bb9c\") " pod="openshift-must-gather-76nst/crc-debug-pwjp8" Dec 11 11:03:33 crc kubenswrapper[4746]: I1211 11:03:33.648954 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="272df024-40d2-43aa-a1cc-bfe940d2642f" path="/var/lib/kubelet/pods/272df024-40d2-43aa-a1cc-bfe940d2642f/volumes" Dec 11 11:03:33 crc kubenswrapper[4746]: I1211 11:03:33.654445 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtbgs\" (UniqueName: \"kubernetes.io/projected/7bf83e07-e5a0-4eb6-accd-3dd74442bb9c-kube-api-access-qtbgs\") pod \"crc-debug-pwjp8\" (UID: \"7bf83e07-e5a0-4eb6-accd-3dd74442bb9c\") " pod="openshift-must-gather-76nst/crc-debug-pwjp8" Dec 11 11:03:33 crc kubenswrapper[4746]: I1211 11:03:33.669545 4746 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-76nst/crc-debug-pwjp8" Dec 11 11:03:33 crc kubenswrapper[4746]: W1211 11:03:33.720276 4746 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7bf83e07_e5a0_4eb6_accd_3dd74442bb9c.slice/crio-a824770e8d0ec85ac36c85cc926dd96dfdeb7aa8387e58b75ae56ac217f1d6c9 WatchSource:0}: Error finding container a824770e8d0ec85ac36c85cc926dd96dfdeb7aa8387e58b75ae56ac217f1d6c9: Status 404 returned error can't find the container with id a824770e8d0ec85ac36c85cc926dd96dfdeb7aa8387e58b75ae56ac217f1d6c9 Dec 11 11:03:34 crc kubenswrapper[4746]: I1211 11:03:34.590495 4746 generic.go:334] "Generic (PLEG): container finished" podID="7bf83e07-e5a0-4eb6-accd-3dd74442bb9c" containerID="4d3aa9d36df0e371097f7b3b59fa87fea4655e4ede9b492e949f28dd5f3e1c24" exitCode=0 Dec 11 11:03:34 crc kubenswrapper[4746]: I1211 11:03:34.590602 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76nst/crc-debug-pwjp8" event={"ID":"7bf83e07-e5a0-4eb6-accd-3dd74442bb9c","Type":"ContainerDied","Data":"4d3aa9d36df0e371097f7b3b59fa87fea4655e4ede9b492e949f28dd5f3e1c24"} Dec 11 11:03:34 crc kubenswrapper[4746]: I1211 11:03:34.590854 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76nst/crc-debug-pwjp8" event={"ID":"7bf83e07-e5a0-4eb6-accd-3dd74442bb9c","Type":"ContainerStarted","Data":"a824770e8d0ec85ac36c85cc926dd96dfdeb7aa8387e58b75ae56ac217f1d6c9"} Dec 11 11:03:34 crc kubenswrapper[4746]: I1211 11:03:34.638681 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-76nst/crc-debug-pwjp8"] Dec 11 11:03:34 crc kubenswrapper[4746]: I1211 11:03:34.652127 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-76nst/crc-debug-pwjp8"] Dec 11 11:03:35 crc kubenswrapper[4746]: I1211 11:03:35.707772 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-76nst/crc-debug-pwjp8" Dec 11 11:03:35 crc kubenswrapper[4746]: I1211 11:03:35.876665 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtbgs\" (UniqueName: \"kubernetes.io/projected/7bf83e07-e5a0-4eb6-accd-3dd74442bb9c-kube-api-access-qtbgs\") pod \"7bf83e07-e5a0-4eb6-accd-3dd74442bb9c\" (UID: \"7bf83e07-e5a0-4eb6-accd-3dd74442bb9c\") " Dec 11 11:03:35 crc kubenswrapper[4746]: I1211 11:03:35.877300 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7bf83e07-e5a0-4eb6-accd-3dd74442bb9c-host\") pod \"7bf83e07-e5a0-4eb6-accd-3dd74442bb9c\" (UID: \"7bf83e07-e5a0-4eb6-accd-3dd74442bb9c\") " Dec 11 11:03:35 crc kubenswrapper[4746]: I1211 11:03:35.877535 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7bf83e07-e5a0-4eb6-accd-3dd74442bb9c-host" (OuterVolumeSpecName: "host") pod "7bf83e07-e5a0-4eb6-accd-3dd74442bb9c" (UID: "7bf83e07-e5a0-4eb6-accd-3dd74442bb9c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 11 11:03:35 crc kubenswrapper[4746]: I1211 11:03:35.878536 4746 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7bf83e07-e5a0-4eb6-accd-3dd74442bb9c-host\") on node \"crc\" DevicePath \"\"" Dec 11 11:03:35 crc kubenswrapper[4746]: I1211 11:03:35.884369 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bf83e07-e5a0-4eb6-accd-3dd74442bb9c-kube-api-access-qtbgs" (OuterVolumeSpecName: "kube-api-access-qtbgs") pod "7bf83e07-e5a0-4eb6-accd-3dd74442bb9c" (UID: "7bf83e07-e5a0-4eb6-accd-3dd74442bb9c"). InnerVolumeSpecName "kube-api-access-qtbgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:03:35 crc kubenswrapper[4746]: I1211 11:03:35.980177 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qtbgs\" (UniqueName: \"kubernetes.io/projected/7bf83e07-e5a0-4eb6-accd-3dd74442bb9c-kube-api-access-qtbgs\") on node \"crc\" DevicePath \"\"" Dec 11 11:03:36 crc kubenswrapper[4746]: I1211 11:03:36.611236 4746 scope.go:117] "RemoveContainer" containerID="4d3aa9d36df0e371097f7b3b59fa87fea4655e4ede9b492e949f28dd5f3e1c24" Dec 11 11:03:36 crc kubenswrapper[4746]: I1211 11:03:36.611291 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76nst/crc-debug-pwjp8" Dec 11 11:03:37 crc kubenswrapper[4746]: I1211 11:03:37.642074 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bf83e07-e5a0-4eb6-accd-3dd74442bb9c" path="/var/lib/kubelet/pods/7bf83e07-e5a0-4eb6-accd-3dd74442bb9c/volumes" Dec 11 11:03:40 crc kubenswrapper[4746]: I1211 11:03:40.453819 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:40 crc kubenswrapper[4746]: I1211 11:03:40.510472 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wjmd5"] Dec 11 11:03:40 crc kubenswrapper[4746]: I1211 11:03:40.651291 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wjmd5" podUID="417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b" containerName="registry-server" containerID="cri-o://63701e16b5a858de61dde8e8f34c079f7fa0979403eeaa1c91a0d4a3216a392e" gracePeriod=2 Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.140711 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.179686 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-utilities\") pod \"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b\" (UID: \"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b\") " Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.179756 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-catalog-content\") pod \"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b\" (UID: \"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b\") " Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.179926 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw6q7\" (UniqueName: \"kubernetes.io/projected/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-kube-api-access-tw6q7\") pod \"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b\" (UID: \"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b\") " Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.180420 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-utilities" (OuterVolumeSpecName: "utilities") pod "417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b" (UID: "417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.180640 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.187266 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-kube-api-access-tw6q7" (OuterVolumeSpecName: "kube-api-access-tw6q7") pod "417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b" (UID: "417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b"). InnerVolumeSpecName "kube-api-access-tw6q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.239868 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b" (UID: "417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.282615 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.282652 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tw6q7\" (UniqueName: \"kubernetes.io/projected/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b-kube-api-access-tw6q7\") on node \"crc\" DevicePath \"\"" Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.665102 4746 generic.go:334] "Generic (PLEG): container finished" podID="417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b" containerID="63701e16b5a858de61dde8e8f34c079f7fa0979403eeaa1c91a0d4a3216a392e" exitCode=0 Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.665154 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjmd5" event={"ID":"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b","Type":"ContainerDied","Data":"63701e16b5a858de61dde8e8f34c079f7fa0979403eeaa1c91a0d4a3216a392e"} Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.665166 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wjmd5" Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.665190 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wjmd5" event={"ID":"417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b","Type":"ContainerDied","Data":"af8b47635867498d0d0c8bb02d7cf61b3a43d58fe7d2a8719a5b313262c2f76f"} Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.665208 4746 scope.go:117] "RemoveContainer" containerID="63701e16b5a858de61dde8e8f34c079f7fa0979403eeaa1c91a0d4a3216a392e" Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.692914 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wjmd5"] Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.695263 4746 scope.go:117] "RemoveContainer" containerID="62820c50d2cab6ecae4cefd4d459dcd225e31480e4a1c6bc218af7d8868560d9" Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.703568 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wjmd5"] Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.740443 4746 scope.go:117] "RemoveContainer" containerID="0b265167300d318428f15be256e0aeb13a14289458db34899bfbe530080fb022" Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.781442 4746 scope.go:117] "RemoveContainer" containerID="63701e16b5a858de61dde8e8f34c079f7fa0979403eeaa1c91a0d4a3216a392e" Dec 11 11:03:41 crc kubenswrapper[4746]: E1211 11:03:41.782362 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63701e16b5a858de61dde8e8f34c079f7fa0979403eeaa1c91a0d4a3216a392e\": container with ID starting with 63701e16b5a858de61dde8e8f34c079f7fa0979403eeaa1c91a0d4a3216a392e not found: ID does not exist" containerID="63701e16b5a858de61dde8e8f34c079f7fa0979403eeaa1c91a0d4a3216a392e" Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.782405 4746 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63701e16b5a858de61dde8e8f34c079f7fa0979403eeaa1c91a0d4a3216a392e"} err="failed to get container status \"63701e16b5a858de61dde8e8f34c079f7fa0979403eeaa1c91a0d4a3216a392e\": rpc error: code = NotFound desc = could not find container \"63701e16b5a858de61dde8e8f34c079f7fa0979403eeaa1c91a0d4a3216a392e\": container with ID starting with 63701e16b5a858de61dde8e8f34c079f7fa0979403eeaa1c91a0d4a3216a392e not found: ID does not exist" Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.782463 4746 scope.go:117] "RemoveContainer" containerID="62820c50d2cab6ecae4cefd4d459dcd225e31480e4a1c6bc218af7d8868560d9" Dec 11 11:03:41 crc kubenswrapper[4746]: E1211 11:03:41.784082 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62820c50d2cab6ecae4cefd4d459dcd225e31480e4a1c6bc218af7d8868560d9\": container with ID starting with 62820c50d2cab6ecae4cefd4d459dcd225e31480e4a1c6bc218af7d8868560d9 not found: ID does not exist" containerID="62820c50d2cab6ecae4cefd4d459dcd225e31480e4a1c6bc218af7d8868560d9" Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.784108 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62820c50d2cab6ecae4cefd4d459dcd225e31480e4a1c6bc218af7d8868560d9"} err="failed to get container status \"62820c50d2cab6ecae4cefd4d459dcd225e31480e4a1c6bc218af7d8868560d9\": rpc error: code = NotFound desc = could not find container \"62820c50d2cab6ecae4cefd4d459dcd225e31480e4a1c6bc218af7d8868560d9\": container with ID starting with 62820c50d2cab6ecae4cefd4d459dcd225e31480e4a1c6bc218af7d8868560d9 not found: ID does not exist" Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.784123 4746 scope.go:117] "RemoveContainer" containerID="0b265167300d318428f15be256e0aeb13a14289458db34899bfbe530080fb022" Dec 11 11:03:41 crc kubenswrapper[4746]: E1211 
11:03:41.785930 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b265167300d318428f15be256e0aeb13a14289458db34899bfbe530080fb022\": container with ID starting with 0b265167300d318428f15be256e0aeb13a14289458db34899bfbe530080fb022 not found: ID does not exist" containerID="0b265167300d318428f15be256e0aeb13a14289458db34899bfbe530080fb022" Dec 11 11:03:41 crc kubenswrapper[4746]: I1211 11:03:41.785955 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b265167300d318428f15be256e0aeb13a14289458db34899bfbe530080fb022"} err="failed to get container status \"0b265167300d318428f15be256e0aeb13a14289458db34899bfbe530080fb022\": rpc error: code = NotFound desc = could not find container \"0b265167300d318428f15be256e0aeb13a14289458db34899bfbe530080fb022\": container with ID starting with 0b265167300d318428f15be256e0aeb13a14289458db34899bfbe530080fb022 not found: ID does not exist" Dec 11 11:03:43 crc kubenswrapper[4746]: I1211 11:03:43.644021 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b" path="/var/lib/kubelet/pods/417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b/volumes" Dec 11 11:04:00 crc kubenswrapper[4746]: I1211 11:04:00.369197 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-d68478596-8jx82_80265cad-1b6f-4dfc-aee2-04a1da6152fc/barbican-api/0.log" Dec 11 11:04:00 crc kubenswrapper[4746]: I1211 11:04:00.516527 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-d68478596-8jx82_80265cad-1b6f-4dfc-aee2-04a1da6152fc/barbican-api-log/0.log" Dec 11 11:04:00 crc kubenswrapper[4746]: I1211 11:04:00.586338 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d4579cd86-47qwg_6346d2a5-4279-407e-981e-423993612a5c/barbican-keystone-listener-log/0.log" Dec 11 11:04:00 crc 
kubenswrapper[4746]: I1211 11:04:00.617563 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5d4579cd86-47qwg_6346d2a5-4279-407e-981e-423993612a5c/barbican-keystone-listener/0.log" Dec 11 11:04:00 crc kubenswrapper[4746]: I1211 11:04:00.753528 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55c6884c59-pb7cb_17f2e937-e45d-48e4-be34-f013cb61dc7e/barbican-worker/0.log" Dec 11 11:04:00 crc kubenswrapper[4746]: I1211 11:04:00.796081 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-55c6884c59-pb7cb_17f2e937-e45d-48e4-be34-f013cb61dc7e/barbican-worker-log/0.log" Dec 11 11:04:00 crc kubenswrapper[4746]: I1211 11:04:00.961166 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-spbpj_3de7e541-4120-4c78-866b-9991eb4d1810/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 11:04:01 crc kubenswrapper[4746]: I1211 11:04:01.023982 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad/ceilometer-central-agent/0.log" Dec 11 11:04:01 crc kubenswrapper[4746]: I1211 11:04:01.068776 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad/ceilometer-notification-agent/0.log" Dec 11 11:04:01 crc kubenswrapper[4746]: I1211 11:04:01.184911 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad/proxy-httpd/0.log" Dec 11 11:04:01 crc kubenswrapper[4746]: I1211 11:04:01.219567 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_7dd3f2ea-120c-46e1-af15-5dd8e8c3b1ad/sg-core/0.log" Dec 11 11:04:01 crc kubenswrapper[4746]: I1211 11:04:01.362981 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_93766680-5fd5-4cc4-9ab8-128daeec573d/cinder-api/0.log" Dec 11 11:04:01 crc kubenswrapper[4746]: I1211 11:04:01.449388 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_93766680-5fd5-4cc4-9ab8-128daeec573d/cinder-api-log/0.log" Dec 11 11:04:01 crc kubenswrapper[4746]: I1211 11:04:01.639958 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_663cb4b5-0c8f-4518-9ba3-1d34e8b1949a/probe/0.log" Dec 11 11:04:01 crc kubenswrapper[4746]: I1211 11:04:01.661476 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_663cb4b5-0c8f-4518-9ba3-1d34e8b1949a/cinder-scheduler/0.log" Dec 11 11:04:01 crc kubenswrapper[4746]: I1211 11:04:01.738999 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-jcnjm_f6fdd767-cd5e-4858-9c19-ebc73fd789d4/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 11:04:01 crc kubenswrapper[4746]: I1211 11:04:01.855723 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-rf7p2_797c27c3-e8d6-4324-926e-e3b859e05b51/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 11:04:01 crc kubenswrapper[4746]: I1211 11:04:01.991385 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-wfn8b_0a51bb6a-6ca0-4e2d-8427-70e92cd4730d/init/0.log" Dec 11 11:04:02 crc kubenswrapper[4746]: I1211 11:04:02.192449 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-wfn8b_0a51bb6a-6ca0-4e2d-8427-70e92cd4730d/init/0.log" Dec 11 11:04:02 crc kubenswrapper[4746]: I1211 11:04:02.229307 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-78c64bc9c5-wfn8b_0a51bb6a-6ca0-4e2d-8427-70e92cd4730d/dnsmasq-dns/0.log" Dec 11 11:04:02 crc 
kubenswrapper[4746]: I1211 11:04:02.267085 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-9fnmm_87d58bd7-f602-4d6b-b16c-1178233ebe3f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 11:04:02 crc kubenswrapper[4746]: I1211 11:04:02.433921 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_db60fce8-8218-4af0-84db-6bbbe7218d4f/glance-httpd/0.log" Dec 11 11:04:02 crc kubenswrapper[4746]: I1211 11:04:02.474308 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_db60fce8-8218-4af0-84db-6bbbe7218d4f/glance-log/0.log" Dec 11 11:04:02 crc kubenswrapper[4746]: I1211 11:04:02.627356 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_805456fb-d8e0-4341-b5ad-93906c3ad0e5/glance-httpd/0.log" Dec 11 11:04:02 crc kubenswrapper[4746]: I1211 11:04:02.632724 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_805456fb-d8e0-4341-b5ad-93906c3ad0e5/glance-log/0.log" Dec 11 11:04:02 crc kubenswrapper[4746]: I1211 11:04:02.829738 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-b7f654f86-sh94c_b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81/horizon/0.log" Dec 11 11:04:02 crc kubenswrapper[4746]: I1211 11:04:02.992571 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-vh5hc_a9076913-deed-4328-8c4b-147c3f7bac9a/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 11:04:03 crc kubenswrapper[4746]: I1211 11:04:03.234575 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-94sts_9a09e7c3-6aed-4155-bbe9-7be9b885cd57/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 11:04:03 crc kubenswrapper[4746]: I1211 
11:04:03.402636 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-b7f654f86-sh94c_b5fc9dd4-aca9-44c7-b8d4-cbbc19f05e81/horizon-log/0.log" Dec 11 11:04:03 crc kubenswrapper[4746]: I1211 11:04:03.595760 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-65c49f59b-9mqvh_689e0dd9-7055-4ca2-81b3-c66d9850e166/keystone-api/0.log" Dec 11 11:04:03 crc kubenswrapper[4746]: I1211 11:04:03.599458 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29424181-sklrt_47208b16-9400-4363-97f2-fd70600a8430/keystone-cron/0.log" Dec 11 11:04:03 crc kubenswrapper[4746]: I1211 11:04:03.789808 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_a12b0580-9910-43bc-ac49-bbb03f54211b/kube-state-metrics/0.log" Dec 11 11:04:03 crc kubenswrapper[4746]: I1211 11:04:03.891505 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-85b2t_1424dbeb-f9d9-48d1-8b92-14828c8ea326/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 11:04:04 crc kubenswrapper[4746]: I1211 11:04:04.257801 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7754896b7c-5hf99_217bbeb1-db62-4c24-82de-be79c9bad92b/neutron-api/0.log" Dec 11 11:04:04 crc kubenswrapper[4746]: I1211 11:04:04.358019 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-8bjgw_e1498cd2-84fe-4769-8fc5-ffe9f8e32251/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 11:04:04 crc kubenswrapper[4746]: I1211 11:04:04.378737 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-7754896b7c-5hf99_217bbeb1-db62-4c24-82de-be79c9bad92b/neutron-httpd/0.log" Dec 11 11:04:05 crc kubenswrapper[4746]: I1211 11:04:05.005323 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_015a8233-ebde-4703-a8bb-81267822daaa/nova-api-log/0.log" Dec 11 11:04:05 crc kubenswrapper[4746]: I1211 11:04:05.021483 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5af9e7da-bc61-40ee-8c58-9f2201d12884/nova-cell0-conductor-conductor/0.log" Dec 11 11:04:05 crc kubenswrapper[4746]: I1211 11:04:05.249461 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_3d3d3996-336f-4ca7-a8eb-16a243b55115/nova-cell1-conductor-conductor/0.log" Dec 11 11:04:05 crc kubenswrapper[4746]: I1211 11:04:05.381626 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_12f63c9f-c2d2-45a1-99b7-ee148b220e4d/nova-cell1-novncproxy-novncproxy/0.log" Dec 11 11:04:05 crc kubenswrapper[4746]: I1211 11:04:05.433667 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_015a8233-ebde-4703-a8bb-81267822daaa/nova-api-api/0.log" Dec 11 11:04:05 crc kubenswrapper[4746]: I1211 11:04:05.543259 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-jsx69_731d2759-47a3-4e5e-a753-e2cb1cb7c982/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 11:04:05 crc kubenswrapper[4746]: I1211 11:04:05.734309 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3dcbca6a-41cb-489b-9632-00e734e2c95b/nova-metadata-log/0.log" Dec 11 11:04:06 crc kubenswrapper[4746]: I1211 11:04:06.070935 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f5170f24-7cb7-43d5-bacc-c8224cfabcf4/mysql-bootstrap/0.log" Dec 11 11:04:06 crc kubenswrapper[4746]: I1211 11:04:06.162598 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_5486ad2e-b2db-4967-8308-592b79065f54/nova-scheduler-scheduler/0.log" Dec 11 11:04:06 crc kubenswrapper[4746]: I1211 
11:04:06.276579 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f5170f24-7cb7-43d5-bacc-c8224cfabcf4/galera/0.log" Dec 11 11:04:06 crc kubenswrapper[4746]: I1211 11:04:06.332768 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_f5170f24-7cb7-43d5-bacc-c8224cfabcf4/mysql-bootstrap/0.log" Dec 11 11:04:06 crc kubenswrapper[4746]: I1211 11:04:06.522251 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f35f21ce-59cb-4ee0-850c-9aba4010c890/mysql-bootstrap/0.log" Dec 11 11:04:06 crc kubenswrapper[4746]: I1211 11:04:06.765218 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f35f21ce-59cb-4ee0-850c-9aba4010c890/mysql-bootstrap/0.log" Dec 11 11:04:06 crc kubenswrapper[4746]: I1211 11:04:06.794887 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_f35f21ce-59cb-4ee0-850c-9aba4010c890/galera/0.log" Dec 11 11:04:06 crc kubenswrapper[4746]: I1211 11:04:06.969680 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_f39575aa-fcfa-42ad-aceb-a8611602030f/openstackclient/0.log" Dec 11 11:04:07 crc kubenswrapper[4746]: I1211 11:04:07.058740 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-242vs_31760b52-7caf-49dd-bf1e-2d2f88b000a2/ovn-controller/0.log" Dec 11 11:04:07 crc kubenswrapper[4746]: I1211 11:04:07.442329 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_3dcbca6a-41cb-489b-9632-00e734e2c95b/nova-metadata-metadata/0.log" Dec 11 11:04:07 crc kubenswrapper[4746]: I1211 11:04:07.801845 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-jklpb_bc3dc4dd-014a-42fe-a1e7-ee2d10866d75/openstack-network-exporter/0.log" Dec 11 11:04:07 crc kubenswrapper[4746]: I1211 11:04:07.852858 4746 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w5jlj_fbd694c4-2e54-4535-a357-0fb7ffdcabdb/ovsdb-server-init/0.log" Dec 11 11:04:08 crc kubenswrapper[4746]: I1211 11:04:08.143294 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w5jlj_fbd694c4-2e54-4535-a357-0fb7ffdcabdb/ovs-vswitchd/0.log" Dec 11 11:04:08 crc kubenswrapper[4746]: I1211 11:04:08.169155 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w5jlj_fbd694c4-2e54-4535-a357-0fb7ffdcabdb/ovsdb-server-init/0.log" Dec 11 11:04:08 crc kubenswrapper[4746]: I1211 11:04:08.224269 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w5jlj_fbd694c4-2e54-4535-a357-0fb7ffdcabdb/ovsdb-server/0.log" Dec 11 11:04:08 crc kubenswrapper[4746]: I1211 11:04:08.419089 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_01f4f65b-37fc-4500-a9ba-ba3a717c37bb/ovn-northd/0.log" Dec 11 11:04:08 crc kubenswrapper[4746]: I1211 11:04:08.419743 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_01f4f65b-37fc-4500-a9ba-ba3a717c37bb/openstack-network-exporter/0.log" Dec 11 11:04:08 crc kubenswrapper[4746]: I1211 11:04:08.441335 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-tn2d9_a42d14d6-9ea4-4ff3-b72b-78bfb51c14a8/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 11:04:08 crc kubenswrapper[4746]: I1211 11:04:08.615311 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_44843435-5bdd-416c-af49-abc0ce7c6c03/openstack-network-exporter/0.log" Dec 11 11:04:08 crc kubenswrapper[4746]: I1211 11:04:08.724269 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_44843435-5bdd-416c-af49-abc0ce7c6c03/ovsdbserver-nb/0.log" Dec 11 11:04:08 crc kubenswrapper[4746]: I1211 
11:04:08.941434 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f17050a1-f53d-4058-9b22-1d26754f13d0/ovsdbserver-sb/0.log" Dec 11 11:04:08 crc kubenswrapper[4746]: I1211 11:04:08.949443 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f17050a1-f53d-4058-9b22-1d26754f13d0/openstack-network-exporter/0.log" Dec 11 11:04:09 crc kubenswrapper[4746]: I1211 11:04:09.063538 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bb679f888-dj844_621d56dd-8011-4236-a393-6b57891b3f37/placement-api/0.log" Dec 11 11:04:09 crc kubenswrapper[4746]: I1211 11:04:09.504178 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-bb679f888-dj844_621d56dd-8011-4236-a393-6b57891b3f37/placement-log/0.log" Dec 11 11:04:09 crc kubenswrapper[4746]: I1211 11:04:09.517287 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e96da7ab-1e2a-4f9c-bb48-9955198a646a/setup-container/0.log" Dec 11 11:04:10 crc kubenswrapper[4746]: I1211 11:04:10.109746 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9c61eb65-bc9f-4b9f-84c8-286e25295809/setup-container/0.log" Dec 11 11:04:10 crc kubenswrapper[4746]: I1211 11:04:10.189156 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e96da7ab-1e2a-4f9c-bb48-9955198a646a/setup-container/0.log" Dec 11 11:04:10 crc kubenswrapper[4746]: I1211 11:04:10.244400 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e96da7ab-1e2a-4f9c-bb48-9955198a646a/rabbitmq/0.log" Dec 11 11:04:10 crc kubenswrapper[4746]: I1211 11:04:10.439125 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9c61eb65-bc9f-4b9f-84c8-286e25295809/rabbitmq/0.log" Dec 11 11:04:10 crc kubenswrapper[4746]: I1211 11:04:10.440085 4746 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9c61eb65-bc9f-4b9f-84c8-286e25295809/setup-container/0.log" Dec 11 11:04:10 crc kubenswrapper[4746]: I1211 11:04:10.500486 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-ftqhw_a9fa6020-1f64-4cc4-8b95-7372a5ce6f92/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 11:04:10 crc kubenswrapper[4746]: I1211 11:04:10.654191 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-9j64g_e9138713-a26f-45a2-8222-3bb43892a757/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 11:04:10 crc kubenswrapper[4746]: I1211 11:04:10.722996 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-v4k9h_30f79518-b92a-4058-8834-45c45c284eee/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 11:04:10 crc kubenswrapper[4746]: I1211 11:04:10.933772 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-5vqjm_6da9c642-e03d-463d-a3f1-c74bb27843c2/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 11:04:11 crc kubenswrapper[4746]: I1211 11:04:11.118637 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-5c2w5_ce601a12-7347-4ddd-8ab4-5ec9f3f45b5c/ssh-known-hosts-edpm-deployment/0.log" Dec 11 11:04:11 crc kubenswrapper[4746]: I1211 11:04:11.223158 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7cc45cfb45-bbbq8_5c65e9de-7890-47aa-bcf7-48cdfd6dd262/proxy-server/0.log" Dec 11 11:04:11 crc kubenswrapper[4746]: I1211 11:04:11.395779 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7cc45cfb45-bbbq8_5c65e9de-7890-47aa-bcf7-48cdfd6dd262/proxy-httpd/0.log" Dec 11 11:04:11 crc kubenswrapper[4746]: I1211 
11:04:11.454015 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wrnch_94f9d09a-c638-4da1-a6e0-3337621da894/swift-ring-rebalance/0.log" Dec 11 11:04:11 crc kubenswrapper[4746]: I1211 11:04:11.584303 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/account-reaper/0.log" Dec 11 11:04:11 crc kubenswrapper[4746]: I1211 11:04:11.652789 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/account-auditor/0.log" Dec 11 11:04:11 crc kubenswrapper[4746]: I1211 11:04:11.715151 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/account-replicator/0.log" Dec 11 11:04:11 crc kubenswrapper[4746]: I1211 11:04:11.760864 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/account-server/0.log" Dec 11 11:04:11 crc kubenswrapper[4746]: I1211 11:04:11.842030 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/container-auditor/0.log" Dec 11 11:04:11 crc kubenswrapper[4746]: I1211 11:04:11.912594 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/container-replicator/0.log" Dec 11 11:04:11 crc kubenswrapper[4746]: I1211 11:04:11.950038 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/container-server/0.log" Dec 11 11:04:11 crc kubenswrapper[4746]: I1211 11:04:11.997256 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/container-updater/0.log" Dec 11 11:04:12 crc kubenswrapper[4746]: I1211 11:04:12.111961 4746 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/object-auditor/0.log" Dec 11 11:04:12 crc kubenswrapper[4746]: I1211 11:04:12.163738 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/object-expirer/0.log" Dec 11 11:04:12 crc kubenswrapper[4746]: I1211 11:04:12.239842 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/object-replicator/0.log" Dec 11 11:04:12 crc kubenswrapper[4746]: I1211 11:04:12.274531 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/object-server/0.log" Dec 11 11:04:12 crc kubenswrapper[4746]: I1211 11:04:12.348761 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/object-updater/0.log" Dec 11 11:04:12 crc kubenswrapper[4746]: I1211 11:04:12.362004 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/rsync/0.log" Dec 11 11:04:12 crc kubenswrapper[4746]: I1211 11:04:12.503465 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_3df27f8b-76bd-441d-9c3a-2b8bd1f250c7/swift-recon-cron/0.log" Dec 11 11:04:12 crc kubenswrapper[4746]: I1211 11:04:12.738377 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-4h4g4_c19e1748-770d-45a1-b823-77a77b6f22a4/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 11:04:12 crc kubenswrapper[4746]: I1211 11:04:12.788473 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_dac76301-3400-4177-8a19-8b97a7480321/tempest-tests-tempest-tests-runner/0.log" Dec 11 11:04:12 crc kubenswrapper[4746]: I1211 11:04:12.963086 4746 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_c7853e46-c0f0-403f-b095-fc60d413a35f/test-operator-logs-container/0.log" Dec 11 11:04:13 crc kubenswrapper[4746]: I1211 11:04:13.057843 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-mw75t_0d5a90b1-c946-4b31-9337-9b13d58f9819/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 11 11:04:22 crc kubenswrapper[4746]: I1211 11:04:22.685361 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_a99efa0a-c9ba-4a4e-9014-fe1efed47a8a/memcached/0.log" Dec 11 11:04:43 crc kubenswrapper[4746]: I1211 11:04:43.181209 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm_bdce6d6c-a4ea-49a8-bef4-da390b678b24/util/0.log" Dec 11 11:04:43 crc kubenswrapper[4746]: I1211 11:04:43.431537 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm_bdce6d6c-a4ea-49a8-bef4-da390b678b24/pull/0.log" Dec 11 11:04:43 crc kubenswrapper[4746]: I1211 11:04:43.456231 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm_bdce6d6c-a4ea-49a8-bef4-da390b678b24/pull/0.log" Dec 11 11:04:43 crc kubenswrapper[4746]: I1211 11:04:43.634365 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm_bdce6d6c-a4ea-49a8-bef4-da390b678b24/extract/0.log" Dec 11 11:04:43 crc kubenswrapper[4746]: I1211 11:04:43.977786 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-7dcw7_3b4184f0-3e35-4e70-9adc-87a4681c343c/kube-rbac-proxy/0.log" Dec 11 11:04:45 crc 
kubenswrapper[4746]: I1211 11:04:45.129094 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-7dcw7_3b4184f0-3e35-4e70-9adc-87a4681c343c/manager/0.log" Dec 11 11:04:45 crc kubenswrapper[4746]: I1211 11:04:45.238023 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm_bdce6d6c-a4ea-49a8-bef4-da390b678b24/util/0.log" Dec 11 11:04:45 crc kubenswrapper[4746]: I1211 11:04:45.268596 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm_bdce6d6c-a4ea-49a8-bef4-da390b678b24/util/0.log" Dec 11 11:04:45 crc kubenswrapper[4746]: I1211 11:04:45.290340 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_5cf100ddfed55ae9ff226d42c592f3658a4ca1e9df9a7992c88e917e4effrhm_bdce6d6c-a4ea-49a8-bef4-da390b678b24/pull/0.log" Dec 11 11:04:45 crc kubenswrapper[4746]: I1211 11:04:45.456335 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-glgvn_b0e6f3b3-a8b7-4bca-8e55-118bd35a9635/kube-rbac-proxy/0.log" Dec 11 11:04:45 crc kubenswrapper[4746]: I1211 11:04:45.565627 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-glgvn_b0e6f3b3-a8b7-4bca-8e55-118bd35a9635/manager/0.log" Dec 11 11:04:45 crc kubenswrapper[4746]: I1211 11:04:45.625442 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-bkvrk_f9f2bc47-53f4-4216-8fb2-27f2db87123e/kube-rbac-proxy/0.log" Dec 11 11:04:45 crc kubenswrapper[4746]: I1211 11:04:45.685691 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-bkvrk_f9f2bc47-53f4-4216-8fb2-27f2db87123e/manager/0.log" Dec 11 11:04:45 crc kubenswrapper[4746]: I1211 11:04:45.937271 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-5gjg9_01451fbe-7fd7-447c-b6ef-967f7ddff94b/kube-rbac-proxy/0.log" Dec 11 11:04:45 crc kubenswrapper[4746]: I1211 11:04:45.971877 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-5gjg9_01451fbe-7fd7-447c-b6ef-967f7ddff94b/manager/0.log" Dec 11 11:04:46 crc kubenswrapper[4746]: I1211 11:04:46.056948 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-dntwk_2d353fc2-d0c0-47ed-be04-acc87fd980a7/kube-rbac-proxy/0.log" Dec 11 11:04:46 crc kubenswrapper[4746]: I1211 11:04:46.167836 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-dntwk_2d353fc2-d0c0-47ed-be04-acc87fd980a7/manager/0.log" Dec 11 11:04:46 crc kubenswrapper[4746]: I1211 11:04:46.260774 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-7ftgl_fecc9092-bba1-4488-af41-3d970dba0968/kube-rbac-proxy/0.log" Dec 11 11:04:46 crc kubenswrapper[4746]: I1211 11:04:46.294162 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-7ftgl_fecc9092-bba1-4488-af41-3d970dba0968/manager/0.log" Dec 11 11:04:46 crc kubenswrapper[4746]: I1211 11:04:46.417871 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-n8c44_4a11cb95-3107-4526-8ab3-82bb6fd57cef/kube-rbac-proxy/0.log" Dec 11 11:04:46 crc kubenswrapper[4746]: I1211 11:04:46.601644 
4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-zstsf_b47efaee-6921-4f8b-876a-3cf52bd10a27/kube-rbac-proxy/0.log" Dec 11 11:04:46 crc kubenswrapper[4746]: I1211 11:04:46.622647 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-zstsf_b47efaee-6921-4f8b-876a-3cf52bd10a27/manager/0.log" Dec 11 11:04:46 crc kubenswrapper[4746]: I1211 11:04:46.722801 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-n8c44_4a11cb95-3107-4526-8ab3-82bb6fd57cef/manager/0.log" Dec 11 11:04:46 crc kubenswrapper[4746]: I1211 11:04:46.865431 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-x5ghz_9b6740eb-7439-465a-b30a-c838a4d65be6/kube-rbac-proxy/0.log" Dec 11 11:04:46 crc kubenswrapper[4746]: I1211 11:04:46.927362 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-x5ghz_9b6740eb-7439-465a-b30a-c838a4d65be6/manager/0.log" Dec 11 11:04:46 crc kubenswrapper[4746]: I1211 11:04:46.939797 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-8pn4t_252b923b-a265-46c1-8c3e-9ef62d5b1f7a/kube-rbac-proxy/0.log" Dec 11 11:04:47 crc kubenswrapper[4746]: I1211 11:04:47.030903 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-8pn4t_252b923b-a265-46c1-8c3e-9ef62d5b1f7a/manager/0.log" Dec 11 11:04:47 crc kubenswrapper[4746]: I1211 11:04:47.081906 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-wd9vj_4c437995-b526-4ae3-9956-b541694d54d4/kube-rbac-proxy/0.log" Dec 11 11:04:47 crc 
kubenswrapper[4746]: I1211 11:04:47.151675 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-wd9vj_4c437995-b526-4ae3-9956-b541694d54d4/manager/0.log" Dec 11 11:04:47 crc kubenswrapper[4746]: I1211 11:04:47.249340 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-vr9wq_efe16578-2d6a-40a9-9f8c-9b868a6d6a66/kube-rbac-proxy/0.log" Dec 11 11:04:47 crc kubenswrapper[4746]: I1211 11:04:47.343984 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-vr9wq_efe16578-2d6a-40a9-9f8c-9b868a6d6a66/manager/0.log" Dec 11 11:04:47 crc kubenswrapper[4746]: I1211 11:04:47.350723 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-5c8g4_5d8442a7-c511-4f69-b04e-45e750f27bfa/kube-rbac-proxy/0.log" Dec 11 11:04:47 crc kubenswrapper[4746]: I1211 11:04:47.467337 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-5c8g4_5d8442a7-c511-4f69-b04e-45e750f27bfa/manager/0.log" Dec 11 11:04:47 crc kubenswrapper[4746]: I1211 11:04:47.552992 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-g9qlp_9ea7dd8b-4871-43c0-a66f-113742627a6b/kube-rbac-proxy/0.log" Dec 11 11:04:47 crc kubenswrapper[4746]: I1211 11:04:47.591260 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-g9qlp_9ea7dd8b-4871-43c0-a66f-113742627a6b/manager/0.log" Dec 11 11:04:47 crc kubenswrapper[4746]: I1211 11:04:47.705910 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f8tnmf_648adc18-f046-4dcf-9a52-c69946ffa83a/manager/0.log" Dec 11 11:04:47 crc kubenswrapper[4746]: I1211 11:04:47.716412 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879f8tnmf_648adc18-f046-4dcf-9a52-c69946ffa83a/kube-rbac-proxy/0.log" Dec 11 11:04:48 crc kubenswrapper[4746]: I1211 11:04:48.081283 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-hsbv7_f80e03b9-de97-4ae0-bbcd-edc0079f20f3/registry-server/0.log" Dec 11 11:04:48 crc kubenswrapper[4746]: I1211 11:04:48.187291 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-8bb46fc5c-mr76h_a9a072f8-67ea-43f6-a43a-1f553a050f11/operator/0.log" Dec 11 11:04:48 crc kubenswrapper[4746]: I1211 11:04:48.272584 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-kklt5_798a9e32-0bc8-4231-834a-fc2b002c87aa/kube-rbac-proxy/0.log" Dec 11 11:04:48 crc kubenswrapper[4746]: I1211 11:04:48.420743 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-kklt5_798a9e32-0bc8-4231-834a-fc2b002c87aa/manager/0.log" Dec 11 11:04:48 crc kubenswrapper[4746]: I1211 11:04:48.523648 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-882l4_778c0ffc-7a48-4159-8a1b-f34a805bc1ae/kube-rbac-proxy/0.log" Dec 11 11:04:48 crc kubenswrapper[4746]: I1211 11:04:48.583590 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-882l4_778c0ffc-7a48-4159-8a1b-f34a805bc1ae/manager/0.log" Dec 11 11:04:48 crc kubenswrapper[4746]: I1211 
11:04:48.751755 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-rq8z4_23f6b30a-57a8-4920-ab2e-dfebef4d9ce6/operator/0.log" Dec 11 11:04:48 crc kubenswrapper[4746]: I1211 11:04:48.779505 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-b797r_3b8201ce-fb41-4474-9609-689fe0d093ec/kube-rbac-proxy/0.log" Dec 11 11:04:48 crc kubenswrapper[4746]: I1211 11:04:48.929387 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-b797r_3b8201ce-fb41-4474-9609-689fe0d093ec/manager/0.log" Dec 11 11:04:49 crc kubenswrapper[4746]: I1211 11:04:49.031366 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-kz5f2_4890e377-1482-4341-b002-bb54e05d5ded/kube-rbac-proxy/0.log" Dec 11 11:04:49 crc kubenswrapper[4746]: I1211 11:04:49.114101 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-686fb77d86-hmhr5_5d1a162f-09fe-4a7a-854e-3236282b3189/manager/0.log" Dec 11 11:04:49 crc kubenswrapper[4746]: I1211 11:04:49.182103 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-kz5f2_4890e377-1482-4341-b002-bb54e05d5ded/manager/0.log" Dec 11 11:04:49 crc kubenswrapper[4746]: I1211 11:04:49.238886 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-zbwxc_00e181e7-8b84-49f6-96c5-4da046644469/kube-rbac-proxy/0.log" Dec 11 11:04:49 crc kubenswrapper[4746]: I1211 11:04:49.242186 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-zbwxc_00e181e7-8b84-49f6-96c5-4da046644469/manager/0.log" Dec 11 
11:04:49 crc kubenswrapper[4746]: I1211 11:04:49.352141 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-sg2q4_5f3fcc59-b850-4041-84b3-9ccc788c73fc/kube-rbac-proxy/0.log" Dec 11 11:04:49 crc kubenswrapper[4746]: I1211 11:04:49.976624 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-75944c9b7-sg2q4_5f3fcc59-b850-4041-84b3-9ccc788c73fc/manager/0.log" Dec 11 11:05:11 crc kubenswrapper[4746]: I1211 11:05:11.254490 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-4t7sz_3534a1e6-5e3c-4ae5-a981-228d9ae0d5bb/control-plane-machine-set-operator/0.log" Dec 11 11:05:12 crc kubenswrapper[4746]: I1211 11:05:12.099083 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lwl94_b1196114-7a7a-4f77-951a-20d10c32d0b2/machine-api-operator/0.log" Dec 11 11:05:12 crc kubenswrapper[4746]: I1211 11:05:12.129514 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-lwl94_b1196114-7a7a-4f77-951a-20d10c32d0b2/kube-rbac-proxy/0.log" Dec 11 11:05:24 crc kubenswrapper[4746]: I1211 11:05:24.712941 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-thbcd_c5782e13-fb8a-4d0c-b0b2-9649898453d7/cert-manager-controller/0.log" Dec 11 11:05:24 crc kubenswrapper[4746]: I1211 11:05:24.828852 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-j8m6k_421d5070-53ff-451a-bc95-3b8e966afd09/cert-manager-cainjector/0.log" Dec 11 11:05:25 crc kubenswrapper[4746]: I1211 11:05:25.391212 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-hww2m_3eb397c1-a790-47d8-9b3f-93030517ef10/cert-manager-webhook/0.log" Dec 11 11:05:29 crc kubenswrapper[4746]: I1211 11:05:29.878029 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:05:29 crc kubenswrapper[4746]: I1211 11:05:29.879914 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:05:38 crc kubenswrapper[4746]: I1211 11:05:38.521588 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6ff7998486-rzgk7_7aefab1a-981b-48f1-bd2d-4f9f9f5cd49f/nmstate-console-plugin/0.log" Dec 11 11:05:38 crc kubenswrapper[4746]: I1211 11:05:38.744628 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9kf4z_54a843e2-1db9-49db-89e5-5254b7b50bab/nmstate-handler/0.log" Dec 11 11:05:38 crc kubenswrapper[4746]: I1211 11:05:38.814998 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-2sdgm_a911ba40-1cb3-4447-8f86-b03341052ae8/kube-rbac-proxy/0.log" Dec 11 11:05:38 crc kubenswrapper[4746]: I1211 11:05:38.904904 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f7f7578db-2sdgm_a911ba40-1cb3-4447-8f86-b03341052ae8/nmstate-metrics/0.log" Dec 11 11:05:39 crc kubenswrapper[4746]: I1211 11:05:39.023847 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-6769fb99d-dln6r_f3af9a18-e9fe-429b-988a-4289790515b6/nmstate-operator/0.log" Dec 11 11:05:39 crc kubenswrapper[4746]: I1211 11:05:39.107988 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-f8fb84555-vt7v4_9b209fd0-9f8c-4608-99df-7c691450b004/nmstate-webhook/0.log" Dec 11 11:05:55 crc kubenswrapper[4746]: I1211 11:05:55.841818 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-pfns9_912d8133-522c-4e88-a253-bf9be07b4d13/kube-rbac-proxy/0.log" Dec 11 11:05:55 crc kubenswrapper[4746]: I1211 11:05:55.994891 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-5bddd4b946-pfns9_912d8133-522c-4e88-a253-bf9be07b4d13/controller/0.log" Dec 11 11:05:56 crc kubenswrapper[4746]: I1211 11:05:56.086813 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-frr-files/0.log" Dec 11 11:05:56 crc kubenswrapper[4746]: I1211 11:05:56.283408 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-frr-files/0.log" Dec 11 11:05:56 crc kubenswrapper[4746]: I1211 11:05:56.301062 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-reloader/0.log" Dec 11 11:05:56 crc kubenswrapper[4746]: I1211 11:05:56.316723 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-metrics/0.log" Dec 11 11:05:56 crc kubenswrapper[4746]: I1211 11:05:56.389255 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-reloader/0.log" Dec 11 11:05:56 crc kubenswrapper[4746]: I1211 11:05:56.517745 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-frr-files/0.log" Dec 11 11:05:56 crc kubenswrapper[4746]: I1211 11:05:56.539523 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-reloader/0.log" Dec 11 11:05:56 crc kubenswrapper[4746]: I1211 11:05:56.551819 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-metrics/0.log" Dec 11 11:05:56 crc kubenswrapper[4746]: I1211 11:05:56.606476 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-metrics/0.log" Dec 11 11:05:56 crc kubenswrapper[4746]: I1211 11:05:56.785380 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-frr-files/0.log" Dec 11 11:05:56 crc kubenswrapper[4746]: I1211 11:05:56.800282 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-reloader/0.log" Dec 11 11:05:56 crc kubenswrapper[4746]: I1211 11:05:56.844879 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/cp-metrics/0.log" Dec 11 11:05:56 crc kubenswrapper[4746]: I1211 11:05:56.856768 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/controller/0.log" Dec 11 11:05:57 crc kubenswrapper[4746]: I1211 11:05:57.040362 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/frr-metrics/0.log" Dec 11 11:05:57 crc kubenswrapper[4746]: I1211 11:05:57.107440 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/kube-rbac-proxy-frr/0.log" Dec 11 11:05:57 crc kubenswrapper[4746]: I1211 11:05:57.180039 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/kube-rbac-proxy/0.log" Dec 11 11:05:57 crc kubenswrapper[4746]: I1211 11:05:57.350981 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/reloader/0.log" Dec 11 11:05:57 crc kubenswrapper[4746]: I1211 11:05:57.389608 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7784b6fcf-bs5bz_a7040058-21f6-4b31-8369-5c8c471f9cf6/frr-k8s-webhook-server/0.log" Dec 11 11:05:57 crc kubenswrapper[4746]: I1211 11:05:57.801497 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-675c7b7dd8-6mxrg_5803af05-a3ac-403a-88f6-4b7fb21678d0/manager/0.log" Dec 11 11:05:58 crc kubenswrapper[4746]: I1211 11:05:58.027283 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-797f9db975-fpbhm_5b01faa1-3b2c-448d-8285-217c6dbacc16/webhook-server/0.log" Dec 11 11:05:58 crc kubenswrapper[4746]: I1211 11:05:58.171214 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rqt4v_cc662df8-2d5c-41e8-919a-dc8d1f4d20d8/kube-rbac-proxy/0.log" Dec 11 11:05:58 crc kubenswrapper[4746]: I1211 11:05:58.597889 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-8bdwk_f7e4da6b-6da4-4ac3-87b4-ef4042a6fb96/frr/0.log" Dec 11 11:05:58 crc kubenswrapper[4746]: I1211 11:05:58.700857 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-rqt4v_cc662df8-2d5c-41e8-919a-dc8d1f4d20d8/speaker/0.log" Dec 11 11:05:59 crc kubenswrapper[4746]: I1211 11:05:59.877530 4746 patch_prober.go:28] 
interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:05:59 crc kubenswrapper[4746]: I1211 11:05:59.877596 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:06:14 crc kubenswrapper[4746]: I1211 11:06:14.748666 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2_6a0f0228-471c-45fd-9197-241b2ba3c70a/util/0.log" Dec 11 11:06:14 crc kubenswrapper[4746]: I1211 11:06:14.894286 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2_6a0f0228-471c-45fd-9197-241b2ba3c70a/util/0.log" Dec 11 11:06:14 crc kubenswrapper[4746]: I1211 11:06:14.934619 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2_6a0f0228-471c-45fd-9197-241b2ba3c70a/pull/0.log" Dec 11 11:06:14 crc kubenswrapper[4746]: I1211 11:06:14.985526 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2_6a0f0228-471c-45fd-9197-241b2ba3c70a/pull/0.log" Dec 11 11:06:15 crc kubenswrapper[4746]: I1211 11:06:15.146605 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2_6a0f0228-471c-45fd-9197-241b2ba3c70a/pull/0.log" Dec 11 11:06:15 crc 
kubenswrapper[4746]: I1211 11:06:15.151820 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2_6a0f0228-471c-45fd-9197-241b2ba3c70a/util/0.log" Dec 11 11:06:15 crc kubenswrapper[4746]: I1211 11:06:15.208724 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5b7fccbebf0e22d2dd769066fa7aaa90fd620c5db34f2af6c91e4319d49d9w2_6a0f0228-471c-45fd-9197-241b2ba3c70a/extract/0.log" Dec 11 11:06:15 crc kubenswrapper[4746]: I1211 11:06:15.378884 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777_42a9ddfe-247e-4cae-ab22-03c2e0b4a494/util/0.log" Dec 11 11:06:15 crc kubenswrapper[4746]: I1211 11:06:15.497400 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777_42a9ddfe-247e-4cae-ab22-03c2e0b4a494/pull/0.log" Dec 11 11:06:15 crc kubenswrapper[4746]: I1211 11:06:15.517712 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777_42a9ddfe-247e-4cae-ab22-03c2e0b4a494/util/0.log" Dec 11 11:06:15 crc kubenswrapper[4746]: I1211 11:06:15.570287 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777_42a9ddfe-247e-4cae-ab22-03c2e0b4a494/pull/0.log" Dec 11 11:06:15 crc kubenswrapper[4746]: I1211 11:06:15.931697 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777_42a9ddfe-247e-4cae-ab22-03c2e0b4a494/util/0.log" Dec 11 11:06:15 crc kubenswrapper[4746]: I1211 11:06:15.946722 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777_42a9ddfe-247e-4cae-ab22-03c2e0b4a494/pull/0.log" Dec 11 11:06:15 crc kubenswrapper[4746]: I1211 11:06:15.998169 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98085b0df3808ebec39f9f9529f737144fe2dbcdaa4f334014817c0fa8nh777_42a9ddfe-247e-4cae-ab22-03c2e0b4a494/extract/0.log" Dec 11 11:06:16 crc kubenswrapper[4746]: I1211 11:06:16.086937 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hcwv7_164dbbfa-f646-47f0-9de6-d5f466032c15/extract-utilities/0.log" Dec 11 11:06:16 crc kubenswrapper[4746]: I1211 11:06:16.349389 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hcwv7_164dbbfa-f646-47f0-9de6-d5f466032c15/extract-utilities/0.log" Dec 11 11:06:16 crc kubenswrapper[4746]: I1211 11:06:16.527933 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hcwv7_164dbbfa-f646-47f0-9de6-d5f466032c15/extract-content/0.log" Dec 11 11:06:16 crc kubenswrapper[4746]: I1211 11:06:16.536434 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hcwv7_164dbbfa-f646-47f0-9de6-d5f466032c15/extract-content/0.log" Dec 11 11:06:16 crc kubenswrapper[4746]: I1211 11:06:16.742289 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hcwv7_164dbbfa-f646-47f0-9de6-d5f466032c15/extract-content/0.log" Dec 11 11:06:16 crc kubenswrapper[4746]: I1211 11:06:16.780330 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hcwv7_164dbbfa-f646-47f0-9de6-d5f466032c15/extract-utilities/0.log" Dec 11 11:06:16 crc kubenswrapper[4746]: I1211 11:06:16.984014 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-5rmqc_50587a78-88d9-43a1-98d8-8b7941be4600/extract-utilities/0.log" Dec 11 11:06:17 crc kubenswrapper[4746]: I1211 11:06:17.285503 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-hcwv7_164dbbfa-f646-47f0-9de6-d5f466032c15/registry-server/0.log" Dec 11 11:06:17 crc kubenswrapper[4746]: I1211 11:06:17.318699 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5rmqc_50587a78-88d9-43a1-98d8-8b7941be4600/extract-utilities/0.log" Dec 11 11:06:17 crc kubenswrapper[4746]: I1211 11:06:17.342158 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5rmqc_50587a78-88d9-43a1-98d8-8b7941be4600/extract-content/0.log" Dec 11 11:06:17 crc kubenswrapper[4746]: I1211 11:06:17.361574 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5rmqc_50587a78-88d9-43a1-98d8-8b7941be4600/extract-content/0.log" Dec 11 11:06:17 crc kubenswrapper[4746]: I1211 11:06:17.557102 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5rmqc_50587a78-88d9-43a1-98d8-8b7941be4600/extract-content/0.log" Dec 11 11:06:17 crc kubenswrapper[4746]: I1211 11:06:17.558087 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5rmqc_50587a78-88d9-43a1-98d8-8b7941be4600/extract-utilities/0.log" Dec 11 11:06:17 crc kubenswrapper[4746]: I1211 11:06:17.761717 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hfln2_71b25ce7-0542-4bbf-a7c7-ae760345ede3/marketplace-operator/2.log" Dec 11 11:06:17 crc kubenswrapper[4746]: I1211 11:06:17.821116 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hfln2_71b25ce7-0542-4bbf-a7c7-ae760345ede3/marketplace-operator/1.log" Dec 11 11:06:18 crc kubenswrapper[4746]: I1211 11:06:18.015608 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kdhlr_a5a6269e-2d2d-4034-b7af-b0cf87317c98/extract-utilities/0.log" Dec 11 11:06:18 crc kubenswrapper[4746]: I1211 11:06:18.199678 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5rmqc_50587a78-88d9-43a1-98d8-8b7941be4600/registry-server/0.log" Dec 11 11:06:18 crc kubenswrapper[4746]: I1211 11:06:18.233018 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kdhlr_a5a6269e-2d2d-4034-b7af-b0cf87317c98/extract-content/0.log" Dec 11 11:06:18 crc kubenswrapper[4746]: I1211 11:06:18.234405 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kdhlr_a5a6269e-2d2d-4034-b7af-b0cf87317c98/extract-content/0.log" Dec 11 11:06:18 crc kubenswrapper[4746]: I1211 11:06:18.242433 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kdhlr_a5a6269e-2d2d-4034-b7af-b0cf87317c98/extract-utilities/0.log" Dec 11 11:06:18 crc kubenswrapper[4746]: I1211 11:06:18.521815 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kdhlr_a5a6269e-2d2d-4034-b7af-b0cf87317c98/extract-utilities/0.log" Dec 11 11:06:18 crc kubenswrapper[4746]: I1211 11:06:18.522165 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kdhlr_a5a6269e-2d2d-4034-b7af-b0cf87317c98/extract-content/0.log" Dec 11 11:06:18 crc kubenswrapper[4746]: I1211 11:06:18.526946 4746 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-k24k2_115e0a6a-c3eb-4931-af44-a645f0904e3e/extract-utilities/0.log" Dec 11 11:06:18 crc kubenswrapper[4746]: I1211 11:06:18.572966 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-kdhlr_a5a6269e-2d2d-4034-b7af-b0cf87317c98/registry-server/0.log" Dec 11 11:06:18 crc kubenswrapper[4746]: I1211 11:06:18.723983 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k24k2_115e0a6a-c3eb-4931-af44-a645f0904e3e/extract-utilities/0.log" Dec 11 11:06:18 crc kubenswrapper[4746]: I1211 11:06:18.724718 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k24k2_115e0a6a-c3eb-4931-af44-a645f0904e3e/extract-content/0.log" Dec 11 11:06:18 crc kubenswrapper[4746]: I1211 11:06:18.790851 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k24k2_115e0a6a-c3eb-4931-af44-a645f0904e3e/extract-content/0.log" Dec 11 11:06:18 crc kubenswrapper[4746]: I1211 11:06:18.937692 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k24k2_115e0a6a-c3eb-4931-af44-a645f0904e3e/extract-utilities/0.log" Dec 11 11:06:18 crc kubenswrapper[4746]: I1211 11:06:18.998149 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k24k2_115e0a6a-c3eb-4931-af44-a645f0904e3e/extract-content/0.log" Dec 11 11:06:19 crc kubenswrapper[4746]: I1211 11:06:19.492376 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-k24k2_115e0a6a-c3eb-4931-af44-a645f0904e3e/registry-server/0.log" Dec 11 11:06:29 crc kubenswrapper[4746]: I1211 11:06:29.877510 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 11 11:06:29 crc kubenswrapper[4746]: I1211 11:06:29.878225 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 11 11:06:29 crc kubenswrapper[4746]: I1211 11:06:29.878273 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" Dec 11 11:06:29 crc kubenswrapper[4746]: I1211 11:06:29.879070 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed209c0846292b30a240d291d5bc22ab51a73bb55d6b151c1f8bd1bd450c7707"} pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 11 11:06:29 crc kubenswrapper[4746]: I1211 11:06:29.879131 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" containerID="cri-o://ed209c0846292b30a240d291d5bc22ab51a73bb55d6b151c1f8bd1bd450c7707" gracePeriod=600 Dec 11 11:06:30 crc kubenswrapper[4746]: I1211 11:06:30.669797 4746 generic.go:334] "Generic (PLEG): container finished" podID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerID="ed209c0846292b30a240d291d5bc22ab51a73bb55d6b151c1f8bd1bd450c7707" exitCode=0 Dec 11 11:06:30 crc kubenswrapper[4746]: I1211 11:06:30.669869 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" 
event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerDied","Data":"ed209c0846292b30a240d291d5bc22ab51a73bb55d6b151c1f8bd1bd450c7707"} Dec 11 11:06:30 crc kubenswrapper[4746]: I1211 11:06:30.670896 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerStarted","Data":"d57f95faf68d4d5b331bfd679e85be5539367cc0809c2cd7cfb69ae2b967d430"} Dec 11 11:06:30 crc kubenswrapper[4746]: I1211 11:06:30.670970 4746 scope.go:117] "RemoveContainer" containerID="7fb4fdef3d15b0d468e444f11c027df566a97de863c3818013e1f2b1f12bccba" Dec 11 11:06:33 crc kubenswrapper[4746]: I1211 11:06:33.831528 4746 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-b2xc7"] Dec 11 11:06:33 crc kubenswrapper[4746]: E1211 11:06:33.832600 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b" containerName="extract-utilities" Dec 11 11:06:33 crc kubenswrapper[4746]: I1211 11:06:33.832616 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b" containerName="extract-utilities" Dec 11 11:06:33 crc kubenswrapper[4746]: E1211 11:06:33.832654 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b" containerName="extract-content" Dec 11 11:06:33 crc kubenswrapper[4746]: I1211 11:06:33.832663 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b" containerName="extract-content" Dec 11 11:06:33 crc kubenswrapper[4746]: E1211 11:06:33.832676 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bf83e07-e5a0-4eb6-accd-3dd74442bb9c" containerName="container-00" Dec 11 11:06:33 crc kubenswrapper[4746]: I1211 11:06:33.832685 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bf83e07-e5a0-4eb6-accd-3dd74442bb9c" 
containerName="container-00" Dec 11 11:06:33 crc kubenswrapper[4746]: E1211 11:06:33.832695 4746 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b" containerName="registry-server" Dec 11 11:06:33 crc kubenswrapper[4746]: I1211 11:06:33.832702 4746 state_mem.go:107] "Deleted CPUSet assignment" podUID="417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b" containerName="registry-server" Dec 11 11:06:33 crc kubenswrapper[4746]: I1211 11:06:33.832966 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="417bcb0e-7a7f-4d0f-804f-ca3a8fac6f7b" containerName="registry-server" Dec 11 11:06:33 crc kubenswrapper[4746]: I1211 11:06:33.832988 4746 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bf83e07-e5a0-4eb6-accd-3dd74442bb9c" containerName="container-00" Dec 11 11:06:33 crc kubenswrapper[4746]: I1211 11:06:33.849331 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2xc7" Dec 11 11:06:33 crc kubenswrapper[4746]: I1211 11:06:33.851203 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2xc7"] Dec 11 11:06:34 crc kubenswrapper[4746]: I1211 11:06:34.000327 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d286569-1553-4317-82cc-63ecb5a0d5c6-catalog-content\") pod \"redhat-marketplace-b2xc7\" (UID: \"1d286569-1553-4317-82cc-63ecb5a0d5c6\") " pod="openshift-marketplace/redhat-marketplace-b2xc7" Dec 11 11:06:34 crc kubenswrapper[4746]: I1211 11:06:34.000985 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q9qw\" (UniqueName: \"kubernetes.io/projected/1d286569-1553-4317-82cc-63ecb5a0d5c6-kube-api-access-9q9qw\") pod \"redhat-marketplace-b2xc7\" (UID: \"1d286569-1553-4317-82cc-63ecb5a0d5c6\") " 
pod="openshift-marketplace/redhat-marketplace-b2xc7" Dec 11 11:06:34 crc kubenswrapper[4746]: I1211 11:06:34.001162 4746 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d286569-1553-4317-82cc-63ecb5a0d5c6-utilities\") pod \"redhat-marketplace-b2xc7\" (UID: \"1d286569-1553-4317-82cc-63ecb5a0d5c6\") " pod="openshift-marketplace/redhat-marketplace-b2xc7" Dec 11 11:06:34 crc kubenswrapper[4746]: I1211 11:06:34.104724 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d286569-1553-4317-82cc-63ecb5a0d5c6-catalog-content\") pod \"redhat-marketplace-b2xc7\" (UID: \"1d286569-1553-4317-82cc-63ecb5a0d5c6\") " pod="openshift-marketplace/redhat-marketplace-b2xc7" Dec 11 11:06:34 crc kubenswrapper[4746]: I1211 11:06:34.104827 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q9qw\" (UniqueName: \"kubernetes.io/projected/1d286569-1553-4317-82cc-63ecb5a0d5c6-kube-api-access-9q9qw\") pod \"redhat-marketplace-b2xc7\" (UID: \"1d286569-1553-4317-82cc-63ecb5a0d5c6\") " pod="openshift-marketplace/redhat-marketplace-b2xc7" Dec 11 11:06:34 crc kubenswrapper[4746]: I1211 11:06:34.104868 4746 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d286569-1553-4317-82cc-63ecb5a0d5c6-utilities\") pod \"redhat-marketplace-b2xc7\" (UID: \"1d286569-1553-4317-82cc-63ecb5a0d5c6\") " pod="openshift-marketplace/redhat-marketplace-b2xc7" Dec 11 11:06:34 crc kubenswrapper[4746]: I1211 11:06:34.105290 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d286569-1553-4317-82cc-63ecb5a0d5c6-catalog-content\") pod \"redhat-marketplace-b2xc7\" (UID: \"1d286569-1553-4317-82cc-63ecb5a0d5c6\") " 
pod="openshift-marketplace/redhat-marketplace-b2xc7" Dec 11 11:06:34 crc kubenswrapper[4746]: I1211 11:06:34.105384 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d286569-1553-4317-82cc-63ecb5a0d5c6-utilities\") pod \"redhat-marketplace-b2xc7\" (UID: \"1d286569-1553-4317-82cc-63ecb5a0d5c6\") " pod="openshift-marketplace/redhat-marketplace-b2xc7" Dec 11 11:06:34 crc kubenswrapper[4746]: I1211 11:06:34.133776 4746 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q9qw\" (UniqueName: \"kubernetes.io/projected/1d286569-1553-4317-82cc-63ecb5a0d5c6-kube-api-access-9q9qw\") pod \"redhat-marketplace-b2xc7\" (UID: \"1d286569-1553-4317-82cc-63ecb5a0d5c6\") " pod="openshift-marketplace/redhat-marketplace-b2xc7" Dec 11 11:06:34 crc kubenswrapper[4746]: I1211 11:06:34.183214 4746 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2xc7" Dec 11 11:06:34 crc kubenswrapper[4746]: I1211 11:06:34.759848 4746 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2xc7"] Dec 11 11:06:35 crc kubenswrapper[4746]: I1211 11:06:35.721648 4746 generic.go:334] "Generic (PLEG): container finished" podID="1d286569-1553-4317-82cc-63ecb5a0d5c6" containerID="6a626f8cfa41a1f4e80ae06885972973f89725178defe191f86da1793788d254" exitCode=0 Dec 11 11:06:35 crc kubenswrapper[4746]: I1211 11:06:35.721712 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2xc7" event={"ID":"1d286569-1553-4317-82cc-63ecb5a0d5c6","Type":"ContainerDied","Data":"6a626f8cfa41a1f4e80ae06885972973f89725178defe191f86da1793788d254"} Dec 11 11:06:35 crc kubenswrapper[4746]: I1211 11:06:35.723575 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2xc7" 
event={"ID":"1d286569-1553-4317-82cc-63ecb5a0d5c6","Type":"ContainerStarted","Data":"5a73901f4fe823f02366079837ee175bdd5bcdeab8a190b660b9aebaa6558888"} Dec 11 11:06:36 crc kubenswrapper[4746]: I1211 11:06:36.736487 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2xc7" event={"ID":"1d286569-1553-4317-82cc-63ecb5a0d5c6","Type":"ContainerStarted","Data":"273eeed30f9fe7a2393892e96ca716f24e5a3d1f0aa22d354f373aa37d36ce12"} Dec 11 11:06:37 crc kubenswrapper[4746]: I1211 11:06:37.747070 4746 generic.go:334] "Generic (PLEG): container finished" podID="1d286569-1553-4317-82cc-63ecb5a0d5c6" containerID="273eeed30f9fe7a2393892e96ca716f24e5a3d1f0aa22d354f373aa37d36ce12" exitCode=0 Dec 11 11:06:37 crc kubenswrapper[4746]: I1211 11:06:37.747128 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2xc7" event={"ID":"1d286569-1553-4317-82cc-63ecb5a0d5c6","Type":"ContainerDied","Data":"273eeed30f9fe7a2393892e96ca716f24e5a3d1f0aa22d354f373aa37d36ce12"} Dec 11 11:06:38 crc kubenswrapper[4746]: I1211 11:06:38.758386 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2xc7" event={"ID":"1d286569-1553-4317-82cc-63ecb5a0d5c6","Type":"ContainerStarted","Data":"f3b704cebefa59149605f2f1a8ac62be2eb2650f427afd8b749cb5550d079d45"} Dec 11 11:06:44 crc kubenswrapper[4746]: I1211 11:06:44.184124 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-b2xc7" Dec 11 11:06:44 crc kubenswrapper[4746]: I1211 11:06:44.184795 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-b2xc7" Dec 11 11:06:44 crc kubenswrapper[4746]: I1211 11:06:44.238038 4746 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-b2xc7" Dec 11 11:06:44 crc kubenswrapper[4746]: I1211 
11:06:44.264466 4746 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-b2xc7" podStartSLOduration=8.578935005 podStartE2EDuration="11.264440164s" podCreationTimestamp="2025-12-11 11:06:33 +0000 UTC" firstStartedPulling="2025-12-11 11:06:35.723716557 +0000 UTC m=+4368.583579870" lastFinishedPulling="2025-12-11 11:06:38.409221706 +0000 UTC m=+4371.269085029" observedRunningTime="2025-12-11 11:06:38.788661423 +0000 UTC m=+4371.648524746" watchObservedRunningTime="2025-12-11 11:06:44.264440164 +0000 UTC m=+4377.124303477" Dec 11 11:06:44 crc kubenswrapper[4746]: I1211 11:06:44.881030 4746 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-b2xc7" Dec 11 11:06:44 crc kubenswrapper[4746]: I1211 11:06:44.935398 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2xc7"] Dec 11 11:06:46 crc kubenswrapper[4746]: I1211 11:06:46.836694 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-b2xc7" podUID="1d286569-1553-4317-82cc-63ecb5a0d5c6" containerName="registry-server" containerID="cri-o://f3b704cebefa59149605f2f1a8ac62be2eb2650f427afd8b749cb5550d079d45" gracePeriod=2 Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.531762 4746 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2xc7" Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.690880 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d286569-1553-4317-82cc-63ecb5a0d5c6-catalog-content\") pod \"1d286569-1553-4317-82cc-63ecb5a0d5c6\" (UID: \"1d286569-1553-4317-82cc-63ecb5a0d5c6\") " Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.691295 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d286569-1553-4317-82cc-63ecb5a0d5c6-utilities\") pod \"1d286569-1553-4317-82cc-63ecb5a0d5c6\" (UID: \"1d286569-1553-4317-82cc-63ecb5a0d5c6\") " Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.691522 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q9qw\" (UniqueName: \"kubernetes.io/projected/1d286569-1553-4317-82cc-63ecb5a0d5c6-kube-api-access-9q9qw\") pod \"1d286569-1553-4317-82cc-63ecb5a0d5c6\" (UID: \"1d286569-1553-4317-82cc-63ecb5a0d5c6\") " Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.694201 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d286569-1553-4317-82cc-63ecb5a0d5c6-utilities" (OuterVolumeSpecName: "utilities") pod "1d286569-1553-4317-82cc-63ecb5a0d5c6" (UID: "1d286569-1553-4317-82cc-63ecb5a0d5c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.712263 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d286569-1553-4317-82cc-63ecb5a0d5c6-kube-api-access-9q9qw" (OuterVolumeSpecName: "kube-api-access-9q9qw") pod "1d286569-1553-4317-82cc-63ecb5a0d5c6" (UID: "1d286569-1553-4317-82cc-63ecb5a0d5c6"). InnerVolumeSpecName "kube-api-access-9q9qw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.728399 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d286569-1553-4317-82cc-63ecb5a0d5c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d286569-1553-4317-82cc-63ecb5a0d5c6" (UID: "1d286569-1553-4317-82cc-63ecb5a0d5c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.793872 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q9qw\" (UniqueName: \"kubernetes.io/projected/1d286569-1553-4317-82cc-63ecb5a0d5c6-kube-api-access-9q9qw\") on node \"crc\" DevicePath \"\"" Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.793914 4746 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d286569-1553-4317-82cc-63ecb5a0d5c6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.793927 4746 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d286569-1553-4317-82cc-63ecb5a0d5c6-utilities\") on node \"crc\" DevicePath \"\"" Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.871112 4746 generic.go:334] "Generic (PLEG): container finished" podID="1d286569-1553-4317-82cc-63ecb5a0d5c6" containerID="f3b704cebefa59149605f2f1a8ac62be2eb2650f427afd8b749cb5550d079d45" exitCode=0 Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.871161 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-b2xc7" event={"ID":"1d286569-1553-4317-82cc-63ecb5a0d5c6","Type":"ContainerDied","Data":"f3b704cebefa59149605f2f1a8ac62be2eb2650f427afd8b749cb5550d079d45"} Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.871190 4746 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-b2xc7" event={"ID":"1d286569-1553-4317-82cc-63ecb5a0d5c6","Type":"ContainerDied","Data":"5a73901f4fe823f02366079837ee175bdd5bcdeab8a190b660b9aebaa6558888"}
Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.871209 4746 scope.go:117] "RemoveContainer" containerID="f3b704cebefa59149605f2f1a8ac62be2eb2650f427afd8b749cb5550d079d45"
Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.871316 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-b2xc7"
Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.896336 4746 scope.go:117] "RemoveContainer" containerID="273eeed30f9fe7a2393892e96ca716f24e5a3d1f0aa22d354f373aa37d36ce12"
Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.924003 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2xc7"]
Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.925606 4746 scope.go:117] "RemoveContainer" containerID="6a626f8cfa41a1f4e80ae06885972973f89725178defe191f86da1793788d254"
Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.967581 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-b2xc7"]
Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.991693 4746 scope.go:117] "RemoveContainer" containerID="f3b704cebefa59149605f2f1a8ac62be2eb2650f427afd8b749cb5550d079d45"
Dec 11 11:06:47 crc kubenswrapper[4746]: E1211 11:06:47.992345 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3b704cebefa59149605f2f1a8ac62be2eb2650f427afd8b749cb5550d079d45\": container with ID starting with f3b704cebefa59149605f2f1a8ac62be2eb2650f427afd8b749cb5550d079d45 not found: ID does not exist" containerID="f3b704cebefa59149605f2f1a8ac62be2eb2650f427afd8b749cb5550d079d45"
Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.992400 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3b704cebefa59149605f2f1a8ac62be2eb2650f427afd8b749cb5550d079d45"} err="failed to get container status \"f3b704cebefa59149605f2f1a8ac62be2eb2650f427afd8b749cb5550d079d45\": rpc error: code = NotFound desc = could not find container \"f3b704cebefa59149605f2f1a8ac62be2eb2650f427afd8b749cb5550d079d45\": container with ID starting with f3b704cebefa59149605f2f1a8ac62be2eb2650f427afd8b749cb5550d079d45 not found: ID does not exist"
Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.992434 4746 scope.go:117] "RemoveContainer" containerID="273eeed30f9fe7a2393892e96ca716f24e5a3d1f0aa22d354f373aa37d36ce12"
Dec 11 11:06:47 crc kubenswrapper[4746]: E1211 11:06:47.992845 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"273eeed30f9fe7a2393892e96ca716f24e5a3d1f0aa22d354f373aa37d36ce12\": container with ID starting with 273eeed30f9fe7a2393892e96ca716f24e5a3d1f0aa22d354f373aa37d36ce12 not found: ID does not exist" containerID="273eeed30f9fe7a2393892e96ca716f24e5a3d1f0aa22d354f373aa37d36ce12"
Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.992941 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"273eeed30f9fe7a2393892e96ca716f24e5a3d1f0aa22d354f373aa37d36ce12"} err="failed to get container status \"273eeed30f9fe7a2393892e96ca716f24e5a3d1f0aa22d354f373aa37d36ce12\": rpc error: code = NotFound desc = could not find container \"273eeed30f9fe7a2393892e96ca716f24e5a3d1f0aa22d354f373aa37d36ce12\": container with ID starting with 273eeed30f9fe7a2393892e96ca716f24e5a3d1f0aa22d354f373aa37d36ce12 not found: ID does not exist"
Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.993035 4746 scope.go:117] "RemoveContainer" containerID="6a626f8cfa41a1f4e80ae06885972973f89725178defe191f86da1793788d254"
Dec 11 11:06:47 crc kubenswrapper[4746]: E1211 11:06:47.993428 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a626f8cfa41a1f4e80ae06885972973f89725178defe191f86da1793788d254\": container with ID starting with 6a626f8cfa41a1f4e80ae06885972973f89725178defe191f86da1793788d254 not found: ID does not exist" containerID="6a626f8cfa41a1f4e80ae06885972973f89725178defe191f86da1793788d254"
Dec 11 11:06:47 crc kubenswrapper[4746]: I1211 11:06:47.993456 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a626f8cfa41a1f4e80ae06885972973f89725178defe191f86da1793788d254"} err="failed to get container status \"6a626f8cfa41a1f4e80ae06885972973f89725178defe191f86da1793788d254\": rpc error: code = NotFound desc = could not find container \"6a626f8cfa41a1f4e80ae06885972973f89725178defe191f86da1793788d254\": container with ID starting with 6a626f8cfa41a1f4e80ae06885972973f89725178defe191f86da1793788d254 not found: ID does not exist"
Dec 11 11:06:49 crc kubenswrapper[4746]: I1211 11:06:49.644018 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d286569-1553-4317-82cc-63ecb5a0d5c6" path="/var/lib/kubelet/pods/1d286569-1553-4317-82cc-63ecb5a0d5c6/volumes"
Dec 11 11:06:57 crc kubenswrapper[4746]: E1211 11:06:57.837013 4746 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.214:48240->38.102.83.214:33559: write tcp 38.102.83.214:48240->38.102.83.214:33559: write: broken pipe
Dec 11 11:08:05 crc kubenswrapper[4746]: I1211 11:08:05.937944 4746 scope.go:117] "RemoveContainer" containerID="d409762d4b9459193a756d38757c8e17729c3d32da3dd1a63732261342d00dac"
Dec 11 11:08:05 crc kubenswrapper[4746]: I1211 11:08:05.962871 4746 scope.go:117] "RemoveContainer" containerID="9a2833c0bb55345770c8c9129cb2bb24692266349b4702d22203983397e0df97"
Dec 11 11:08:06 crc kubenswrapper[4746]: I1211 11:08:06.018170 4746 scope.go:117] "RemoveContainer" containerID="333cd023ae585e8d91878581f9cdde7ea769e93561beac9b84b0a43967235b60"
Dec 11 11:08:14 crc kubenswrapper[4746]: I1211 11:08:14.771595 4746 generic.go:334] "Generic (PLEG): container finished" podID="bf2bf3e5-4313-4676-91af-0f30747037ca" containerID="bb9cd277a0aca35509071a11827c297a6f0fc23a250c30a3d0fade16f066ff44" exitCode=0
Dec 11 11:08:14 crc kubenswrapper[4746]: I1211 11:08:14.771643 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-76nst/must-gather-dxpv8" event={"ID":"bf2bf3e5-4313-4676-91af-0f30747037ca","Type":"ContainerDied","Data":"bb9cd277a0aca35509071a11827c297a6f0fc23a250c30a3d0fade16f066ff44"}
Dec 11 11:08:14 crc kubenswrapper[4746]: I1211 11:08:14.772939 4746 scope.go:117] "RemoveContainer" containerID="bb9cd277a0aca35509071a11827c297a6f0fc23a250c30a3d0fade16f066ff44"
Dec 11 11:08:15 crc kubenswrapper[4746]: I1211 11:08:15.333631 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-76nst_must-gather-dxpv8_bf2bf3e5-4313-4676-91af-0f30747037ca/gather/0.log"
Dec 11 11:08:27 crc kubenswrapper[4746]: I1211 11:08:27.701988 4746 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-76nst/must-gather-dxpv8"]
Dec 11 11:08:27 crc kubenswrapper[4746]: I1211 11:08:27.702674 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-76nst/must-gather-dxpv8" podUID="bf2bf3e5-4313-4676-91af-0f30747037ca" containerName="copy" containerID="cri-o://95a444e12377dcd4334d56a13a8c3ad6e3cca545b40b3aeafbaca855eb70b3b5" gracePeriod=2
Dec 11 11:08:27 crc kubenswrapper[4746]: I1211 11:08:27.717686 4746 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-76nst/must-gather-dxpv8"]
Dec 11 11:08:28 crc kubenswrapper[4746]: I1211 11:08:28.622973 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-76nst_must-gather-dxpv8_bf2bf3e5-4313-4676-91af-0f30747037ca/copy/0.log"
Dec 11 11:08:28 crc kubenswrapper[4746]: I1211 11:08:28.624157 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76nst/must-gather-dxpv8"
Dec 11 11:08:28 crc kubenswrapper[4746]: I1211 11:08:28.654579 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwdg4\" (UniqueName: \"kubernetes.io/projected/bf2bf3e5-4313-4676-91af-0f30747037ca-kube-api-access-mwdg4\") pod \"bf2bf3e5-4313-4676-91af-0f30747037ca\" (UID: \"bf2bf3e5-4313-4676-91af-0f30747037ca\") "
Dec 11 11:08:28 crc kubenswrapper[4746]: I1211 11:08:28.654714 4746 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf2bf3e5-4313-4676-91af-0f30747037ca-must-gather-output\") pod \"bf2bf3e5-4313-4676-91af-0f30747037ca\" (UID: \"bf2bf3e5-4313-4676-91af-0f30747037ca\") "
Dec 11 11:08:28 crc kubenswrapper[4746]: I1211 11:08:28.661445 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf2bf3e5-4313-4676-91af-0f30747037ca-kube-api-access-mwdg4" (OuterVolumeSpecName: "kube-api-access-mwdg4") pod "bf2bf3e5-4313-4676-91af-0f30747037ca" (UID: "bf2bf3e5-4313-4676-91af-0f30747037ca"). InnerVolumeSpecName "kube-api-access-mwdg4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 11 11:08:28 crc kubenswrapper[4746]: I1211 11:08:28.759238 4746 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwdg4\" (UniqueName: \"kubernetes.io/projected/bf2bf3e5-4313-4676-91af-0f30747037ca-kube-api-access-mwdg4\") on node \"crc\" DevicePath \"\""
Dec 11 11:08:28 crc kubenswrapper[4746]: I1211 11:08:28.806219 4746 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf2bf3e5-4313-4676-91af-0f30747037ca-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bf2bf3e5-4313-4676-91af-0f30747037ca" (UID: "bf2bf3e5-4313-4676-91af-0f30747037ca"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 11 11:08:28 crc kubenswrapper[4746]: I1211 11:08:28.861409 4746 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bf2bf3e5-4313-4676-91af-0f30747037ca-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 11 11:08:28 crc kubenswrapper[4746]: I1211 11:08:28.900533 4746 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-76nst_must-gather-dxpv8_bf2bf3e5-4313-4676-91af-0f30747037ca/copy/0.log"
Dec 11 11:08:28 crc kubenswrapper[4746]: I1211 11:08:28.901235 4746 generic.go:334] "Generic (PLEG): container finished" podID="bf2bf3e5-4313-4676-91af-0f30747037ca" containerID="95a444e12377dcd4334d56a13a8c3ad6e3cca545b40b3aeafbaca855eb70b3b5" exitCode=143
Dec 11 11:08:28 crc kubenswrapper[4746]: I1211 11:08:28.901289 4746 scope.go:117] "RemoveContainer" containerID="95a444e12377dcd4334d56a13a8c3ad6e3cca545b40b3aeafbaca855eb70b3b5"
Dec 11 11:08:28 crc kubenswrapper[4746]: I1211 11:08:28.901425 4746 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-76nst/must-gather-dxpv8"
Dec 11 11:08:28 crc kubenswrapper[4746]: I1211 11:08:28.936255 4746 scope.go:117] "RemoveContainer" containerID="bb9cd277a0aca35509071a11827c297a6f0fc23a250c30a3d0fade16f066ff44"
Dec 11 11:08:29 crc kubenswrapper[4746]: I1211 11:08:29.052142 4746 scope.go:117] "RemoveContainer" containerID="95a444e12377dcd4334d56a13a8c3ad6e3cca545b40b3aeafbaca855eb70b3b5"
Dec 11 11:08:29 crc kubenswrapper[4746]: E1211 11:08:29.052553 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95a444e12377dcd4334d56a13a8c3ad6e3cca545b40b3aeafbaca855eb70b3b5\": container with ID starting with 95a444e12377dcd4334d56a13a8c3ad6e3cca545b40b3aeafbaca855eb70b3b5 not found: ID does not exist" containerID="95a444e12377dcd4334d56a13a8c3ad6e3cca545b40b3aeafbaca855eb70b3b5"
Dec 11 11:08:29 crc kubenswrapper[4746]: I1211 11:08:29.052591 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95a444e12377dcd4334d56a13a8c3ad6e3cca545b40b3aeafbaca855eb70b3b5"} err="failed to get container status \"95a444e12377dcd4334d56a13a8c3ad6e3cca545b40b3aeafbaca855eb70b3b5\": rpc error: code = NotFound desc = could not find container \"95a444e12377dcd4334d56a13a8c3ad6e3cca545b40b3aeafbaca855eb70b3b5\": container with ID starting with 95a444e12377dcd4334d56a13a8c3ad6e3cca545b40b3aeafbaca855eb70b3b5 not found: ID does not exist"
Dec 11 11:08:29 crc kubenswrapper[4746]: I1211 11:08:29.052615 4746 scope.go:117] "RemoveContainer" containerID="bb9cd277a0aca35509071a11827c297a6f0fc23a250c30a3d0fade16f066ff44"
Dec 11 11:08:29 crc kubenswrapper[4746]: E1211 11:08:29.052970 4746 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb9cd277a0aca35509071a11827c297a6f0fc23a250c30a3d0fade16f066ff44\": container with ID starting with bb9cd277a0aca35509071a11827c297a6f0fc23a250c30a3d0fade16f066ff44 not found: ID does not exist" containerID="bb9cd277a0aca35509071a11827c297a6f0fc23a250c30a3d0fade16f066ff44"
Dec 11 11:08:29 crc kubenswrapper[4746]: I1211 11:08:29.052990 4746 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb9cd277a0aca35509071a11827c297a6f0fc23a250c30a3d0fade16f066ff44"} err="failed to get container status \"bb9cd277a0aca35509071a11827c297a6f0fc23a250c30a3d0fade16f066ff44\": rpc error: code = NotFound desc = could not find container \"bb9cd277a0aca35509071a11827c297a6f0fc23a250c30a3d0fade16f066ff44\": container with ID starting with bb9cd277a0aca35509071a11827c297a6f0fc23a250c30a3d0fade16f066ff44 not found: ID does not exist"
Dec 11 11:08:29 crc kubenswrapper[4746]: I1211 11:08:29.641448 4746 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf2bf3e5-4313-4676-91af-0f30747037ca" path="/var/lib/kubelet/pods/bf2bf3e5-4313-4676-91af-0f30747037ca/volumes"
Dec 11 11:08:59 crc kubenswrapper[4746]: I1211 11:08:59.878140 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 11:08:59 crc kubenswrapper[4746]: I1211 11:08:59.879364 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 11:09:06 crc kubenswrapper[4746]: I1211 11:09:06.073656 4746 scope.go:117] "RemoveContainer" containerID="3b3866be010716c364af44f65ab1f55e58ccce6e9d6d46ddd9c8a0e810bc50be"
Dec 11 11:09:06 crc kubenswrapper[4746]: I1211 11:09:06.098722 4746 scope.go:117] "RemoveContainer" containerID="03b881bec57f4c57de4f42562551c94ba2d86876a06a079b10fa37c3a885d5f8"
Dec 11 11:09:06 crc kubenswrapper[4746]: I1211 11:09:06.118719 4746 scope.go:117] "RemoveContainer" containerID="72cf47075713a1b232d868f9f65733eb901d6e64615b0d08514ee955fec9230b"
Dec 11 11:09:06 crc kubenswrapper[4746]: I1211 11:09:06.175234 4746 scope.go:117] "RemoveContainer" containerID="bbf24cf6e0c366d7967761a25966b284483010a1b5e160fffec2db684e732693"
Dec 11 11:09:29 crc kubenswrapper[4746]: I1211 11:09:29.878168 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 11:09:29 crc kubenswrapper[4746]: I1211 11:09:29.878719 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 11:09:59 crc kubenswrapper[4746]: I1211 11:09:59.878155 4746 patch_prober.go:28] interesting pod/machine-config-daemon-mxwk6 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 11 11:09:59 crc kubenswrapper[4746]: I1211 11:09:59.878755 4746 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 11 11:09:59 crc kubenswrapper[4746]: I1211 11:09:59.878805 4746 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6"
Dec 11 11:09:59 crc kubenswrapper[4746]: I1211 11:09:59.879708 4746 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d57f95faf68d4d5b331bfd679e85be5539367cc0809c2cd7cfb69ae2b967d430"} pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 11 11:09:59 crc kubenswrapper[4746]: I1211 11:09:59.879778 4746 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerName="machine-config-daemon" containerID="cri-o://d57f95faf68d4d5b331bfd679e85be5539367cc0809c2cd7cfb69ae2b967d430" gracePeriod=600
Dec 11 11:10:00 crc kubenswrapper[4746]: E1211 11:10:00.007219 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8"
Dec 11 11:10:00 crc kubenswrapper[4746]: I1211 11:10:00.812292 4746 generic.go:334] "Generic (PLEG): container finished" podID="70a89e1d-ff2b-4918-bae1-2f79d18396e8" containerID="d57f95faf68d4d5b331bfd679e85be5539367cc0809c2cd7cfb69ae2b967d430" exitCode=0
Dec 11 11:10:00 crc kubenswrapper[4746]: I1211 11:10:00.812355 4746 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" event={"ID":"70a89e1d-ff2b-4918-bae1-2f79d18396e8","Type":"ContainerDied","Data":"d57f95faf68d4d5b331bfd679e85be5539367cc0809c2cd7cfb69ae2b967d430"}
Dec 11 11:10:00 crc kubenswrapper[4746]: I1211 11:10:00.812791 4746 scope.go:117] "RemoveContainer" containerID="ed209c0846292b30a240d291d5bc22ab51a73bb55d6b151c1f8bd1bd450c7707"
Dec 11 11:10:00 crc kubenswrapper[4746]: I1211 11:10:00.814143 4746 scope.go:117] "RemoveContainer" containerID="d57f95faf68d4d5b331bfd679e85be5539367cc0809c2cd7cfb69ae2b967d430"
Dec 11 11:10:00 crc kubenswrapper[4746]: E1211 11:10:00.814833 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8"
Dec 11 11:10:06 crc kubenswrapper[4746]: I1211 11:10:06.400248 4746 scope.go:117] "RemoveContainer" containerID="2ab22580af35472a96fbf5e07c3f43487f4d709f11d8ec29c47d2e3a8f2c6886"
Dec 11 11:10:13 crc kubenswrapper[4746]: I1211 11:10:13.630878 4746 scope.go:117] "RemoveContainer" containerID="d57f95faf68d4d5b331bfd679e85be5539367cc0809c2cd7cfb69ae2b967d430"
Dec 11 11:10:13 crc kubenswrapper[4746]: E1211 11:10:13.632737 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8"
Dec 11 11:10:28 crc kubenswrapper[4746]: I1211 11:10:28.630731 4746 scope.go:117] "RemoveContainer" containerID="d57f95faf68d4d5b331bfd679e85be5539367cc0809c2cd7cfb69ae2b967d430"
Dec 11 11:10:28 crc kubenswrapper[4746]: E1211 11:10:28.631532 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8"
Dec 11 11:10:41 crc kubenswrapper[4746]: I1211 11:10:41.631808 4746 scope.go:117] "RemoveContainer" containerID="d57f95faf68d4d5b331bfd679e85be5539367cc0809c2cd7cfb69ae2b967d430"
Dec 11 11:10:41 crc kubenswrapper[4746]: E1211 11:10:41.632662 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8"
Dec 11 11:10:56 crc kubenswrapper[4746]: I1211 11:10:56.631503 4746 scope.go:117] "RemoveContainer" containerID="d57f95faf68d4d5b331bfd679e85be5539367cc0809c2cd7cfb69ae2b967d430"
Dec 11 11:10:56 crc kubenswrapper[4746]: E1211 11:10:56.632139 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8"
Dec 11 11:11:08 crc kubenswrapper[4746]: I1211 11:11:08.630497 4746 scope.go:117] "RemoveContainer" containerID="d57f95faf68d4d5b331bfd679e85be5539367cc0809c2cd7cfb69ae2b967d430"
Dec 11 11:11:08 crc kubenswrapper[4746]: E1211 11:11:08.631317 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8"
Dec 11 11:11:19 crc kubenswrapper[4746]: I1211 11:11:19.631384 4746 scope.go:117] "RemoveContainer" containerID="d57f95faf68d4d5b331bfd679e85be5539367cc0809c2cd7cfb69ae2b967d430"
Dec 11 11:11:19 crc kubenswrapper[4746]: E1211 11:11:19.632156 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8"
Dec 11 11:11:32 crc kubenswrapper[4746]: I1211 11:11:32.630719 4746 scope.go:117] "RemoveContainer" containerID="d57f95faf68d4d5b331bfd679e85be5539367cc0809c2cd7cfb69ae2b967d430"
Dec 11 11:11:32 crc kubenswrapper[4746]: E1211 11:11:32.631542 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8"
Dec 11 11:11:46 crc kubenswrapper[4746]: I1211 11:11:46.630721 4746 scope.go:117] "RemoveContainer" containerID="d57f95faf68d4d5b331bfd679e85be5539367cc0809c2cd7cfb69ae2b967d430"
Dec 11 11:11:46 crc kubenswrapper[4746]: E1211 11:11:46.631712 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8"
Dec 11 11:11:59 crc kubenswrapper[4746]: I1211 11:11:59.630669 4746 scope.go:117] "RemoveContainer" containerID="d57f95faf68d4d5b331bfd679e85be5539367cc0809c2cd7cfb69ae2b967d430"
Dec 11 11:11:59 crc kubenswrapper[4746]: E1211 11:11:59.632431 4746 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-mxwk6_openshift-machine-config-operator(70a89e1d-ff2b-4918-bae1-2f79d18396e8)\"" pod="openshift-machine-config-operator/machine-config-daemon-mxwk6" podUID="70a89e1d-ff2b-4918-bae1-2f79d18396e8"